Subject: Re: Best way to Add from Paradox Table to AWS Postgres Table
Date: Thu, 14 May 2020 19:17:49 +1000
From: Tom Krieg <REMOVEtomkCAPITALS@sdassociates.com.au>
Newsgroups: pnews.paradox-client_server

I see where the problem lies. You need to upload the CSV file to
somewhere the server can see it, which means uploading it to the AWS
instance. With my Paradox apps and Postgres, the customer was running
all their business apps in the cloud, including their Paradox app, but
they also had a private server running Server 2012 with all the usual
Office and MS bulls**t, as well as my Paradox app and the PostgreSQL
server. That way they could upload things like CSV files from
manufacturers and have them update the tables of models and parts held
in Postgres. If you can't upload bulk files to where the server can see
them, you have to go the "one record at a time" route.
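Once the file is sitting somewhere the Postgres server process can read
it, one server-side COPY does the whole bulk load in a single statement.
A minimal sketch, assuming a staging table called parts_import and an
upload path of /srv/uploads/parts.csv (both made-up names):

```sql
-- Server-side bulk load. The path is opened by the postgres server
-- process itself, so the file must live on (or be mounted on) the
-- AWS instance running the database.
COPY parts_import FROM '/srv/uploads/parts.csv'
    WITH (FORMAT csv, HEADER true);
```

If you can only get the file as far as the client machine, psql's \copy
variant streams it over the connection instead; slower, but it doesn't
need server-side file access.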
If you have to pay extra for that facility, pay it; the alternative is
putting up with lethargic performance.
And my recommendation? Use pass-through SQL for the "one record at a
time" update. It'll be faster than any Paradox/BDE/TCursor/copyFromArray
mess.
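On the wire, the pass-through pattern is one explicit transaction per
batch: open it, fire plain INSERTs, commit, repeat. A hedged sketch with
made-up table and column names:

```sql
BEGIN;
INSERT INTO parts (part_no, descr) VALUES ('A1001', 'Widget, left-hand');
INSERT INTO parts (part_no, descr) VALUES ('A1002', 'Widget, right-hand');
-- ... repeat for the rest of the batch ...
COMMIT;  -- one commit per 500-1000 rows, then BEGIN the next batch
```

The commit is the expensive part (the server has to flush to disk), so
paying for it once per batch instead of once per row is where the speed
comes from.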
On 11/05/2020 12:03 pm, Tom Krieg wrote:
> What you want is ONE
> transaction with a commit every 500 or 1000 records and you can only do
> that with SQL, not Paradox.
>