|
Thanks RyanDev, but it's still a no-go. I copied and pasted your code to make sure I did not have any typos.
It creates the new column, NewFile, but there are no changes to the resulting data. It just copies the old column to the new column verbatim.
|
I know there were no changes. It is a SELECT statement, not an UPDATE statement. Perhaps run it again and paste some of the results here so we can see.
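For illustration (the table and column names here are made up, since I don't have your exact schema in front of me), the difference is:
-- a SELECT only computes values for display; the stored data is untouched
SELECT OldFile,
       REPLACE(OldFile, '.tmp', '.dat') AS NewFile
FROM   Files;
-- to actually change the data you would need an UPDATE
-- (assuming NewFile exists as a real column in the table)
UPDATE Files
SET    NewFile = REPLACE(OldFile, '.tmp', '.dat');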
There are only 10 types of people in the world, those who understand binary and those who don't.
|
F#*#k
There was a typo in my SELECT statement. That's why it wasn't finding anything in the search/select column.
Sorry for the trouble.
Thanks for helping.
|
No problem. Glad you found it.
There are only 10 types of people in the world, those who understand binary and those who don't.
|
I am using Qlik to write back to the database (Sybase IQ); however, the insert query fails after inserting 3500 rows with an "access violation" error, even though those 3500 rows do get inserted into the target table.
I have verified that the data beyond row 3500 also gets inserted if I run the load for rows 3500 to 7000, so it cannot be a data issue.
I have looked around for this error and tried setting the BufferCacheSize, PrefetchRows, EnableBulkInsert, etc. properties, but to no avail. If I execute the query with 1000 rows in a single insert, it also fails with the same error. If, however, I try to insert the same data 100 or 500 rows at a time, the load succeeds.
Is there a limitation on how much data the Sybase IQ ODBC driver can handle for an open connection? Any ideas/solutions would be much appreciated.
Thanks.
modified 7-May-15 5:30am.
|
Ankit RS wrote: Any ideas/solutions would be much appreciated.
It sure does sound like you are hitting some sort of memory threshold. I'd suggest batching your records in smaller amounts.
There are only 10 types of people in the world, those who understand binary and those who don't.
|
I second RyanDev's suggestion. I've seen a benchmark somewhere comparing insert speed across batch sizes, and it showed diminishing returns starting at around 50. I think it was for SQL Server, but I'd imagine it's similar for other DBMSs. That would mean you won't gain much with a batch size of 1000 anyway.
If the brain were so simple we could understand it, we would be so simple we couldn't. — Lyall Watson
|
Thanks Ryan and Sascha.
I think I figured out the issue: it most certainly looks to be a threshold issue. On executing an insert inside a loop from 1 to 2000 in the query editor, Sybase throws up a pop-up stating (paraphrased) "Maximum number of entries for the editor is 1000; you should remove successful logs from the log file." There are options to either remove or keep the successful entries in the log. See the attached snapshot.
Surely this is what's causing the insert to fail when executed through Qlik, as there's no handler for the pop-up.
(Funny I can't locate the attach option here for the snapshot)
Edit: Turns out the solution is to: 1) keep the number of rows in each insert execution (loop or otherwise) below 1000, and 2) include the begin/commit transaction statements as part of the query itself instead of using connection.BeginTrans and connection.CommitTrans in the VBScript code (which is what I was doing initially).
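Roughly, the working pattern looks like this (a sketch only; the table and columns are made up, and the point is the sub-1000 batch size plus the transaction statements living inside the query text):
-- one statement string sent per batch; the transaction is handled in the
-- SQL itself instead of via connection.BeginTrans/connection.CommitTrans
BEGIN TRANSACTION;
INSERT INTO TargetTable (Id, Val) VALUES (1, 'a');
INSERT INTO TargetTable (Id, Val) VALUES (2, 'b');
-- ...more rows, keeping each batch under 1000...
COMMIT;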
modified 8-May-15 7:07am.
|
Glad you solved it! Cheers
If the brain were so simple we could understand it, we would be so simple we couldn't. — Lyall Watson
|
I got myself in a pickle here.
I started letting customers delete items in the store, but I copied each deleted item into a separate table.
I should have added a status column instead.
But anyway, on my report for total items sold in the year, I did a join on the product table to get the current cost, images, etc.
So now that some items have been deleted, those records are no longer in the product table but in the recycle table.
I tried a UNION ALL, but it results in 2 records, of which the 2nd is null.
I tried a plain UNION as well, but it produces a reader error.
I'm not sure how to proceed on this, or what words to use to search for it here.
I'm stumped!
[EDIT]
Maybe I should have done an IF EXISTS around the join and just switched tables.
Dim query As String = _
"DECLARE @startDate AS DATE; " & _
"DECLARE @stopDate AS DATE; " & _
"SET @startDate = DATETIMEFROMPARTS(@Year, 1, 1, 0, 0, 0, 0); " & _
"SET @stopDate = DATETIMEFROMPARTS(@Year, @Month, @Day, 23, 59, 59, 999); "
query +=
"SELECT " & _
" coH.PartNumber, " & _
" coH.ManPartNumber, " & _
" SUM(coH.Qty) as TotalQty, " & _
" CAST(SUM(coH.Qty * coh.Cost) AS Decimal(10,2)) as TotalCost, " & _
" CAST(SUM(coH.Qty * coh.Price) AS Decimal(10,2)) as TotalPrice, " & _
" pI.Cost, " & _
" pI.Price, " & _
" pI.ShortDescription, " & _
" pI.LongDescription, " & _
" pI.PostageImage, " & _
" pI.Thumbnail, " & _
" pI.VendorID, " & _
" pI.VendorName, " & _
" pI.Weight, " & _
" pI.ShipHeight, " & _
" pI.ShipWidth, " & _
" pI.ShipDepth, " & _
" pI.LimitedItem, " & _
" vI.Address1, " & _
" vI.Address2, " & _
" vI.City, " & _
" vI.StateCode, " & _
" vI.CountryCode, " & _
" vI.ZipCode, " & _
" vI.ContactName, " & _
" vI.VendorPhone " & _
" FROM CompletedOrdersCartHistory as coH " & _
" LEFT JOIN PRODUCTINFO AS pI " & _
" ON coH.PartNumber = pI.PartNumber " & _
" LEFT JOIN VendorInfo AS vI " & _
" ON pI.VendorID = vI.VendorID " & _
" WHERE coh.OrderDate > @startDate " & _
" AND coh.OrderDate < @stopDate " & _
" AND coh.PartNumber = @PartNumber " & _
" GROUP BY " & _
" coH.PartNumber, " & _
" coH.ManPartNumber, " & _
" pI.Cost, " & _
" pI.Price, " & _
" pI.ShortDescription, " & _
" pI.LongDescription, " & _
" pI.PostageImage, " & _
" pI.Thumbnail, " & _
" pI.VendorID, " & _
" pI.VendorName, " & _
" pI.Weight, " & _
" pI.ShipHeight, " & _
" pI.ShipWidth, " & _
" pI.ShipDepth, " & _
" pI.LimitedItem, " & _
" vi.Address1, " & _
" vI.Address2, " & _
" vI.City, " & _
" vI.StateCode, " & _
" vI.CountryCode, " & _
" vI.ZipCode, " & _
" vI.ContactName, " & _
" vI.VendorPhone "
query +=
" UNION ALL "
query +=
"SELECT " & _
" coH.PartNumber, " & _
" coH.ManPartNumber, " & _
" SUM(coH.Qty) as TotalQty, " & _
" CAST(SUM(coH.Qty * coh.Cost) AS Decimal(10,2)) as TotalCost, " & _
" CAST(SUM(coH.Qty * coh.Price) AS Decimal(10,2)) as TotalPrice, " & _
" pIH.Cost, " & _
" pIH.Price, " & _
" pIH.ShortDescription, " & _
" pIH.LongDescription, " & _
" pIH.PostageImage, " & _
" pIH.Thumbnail, " & _
" pIH.VendorID, " & _
" pIH.VendorName, " & _
" pIH.Weight, " & _
" pIH.ShipHeight, " & _
" pIH.ShipWidth, " & _
" pIH.ShipDepth, " & _
" pIH.LimitedItem, " & _
" vI.Address1, " & _
" vI.Address2, " & _
" vI.City, " & _
" vI.StateCode, " & _
" vI.CountryCode, " & _
" vI.ZipCode, " & _
" vI.ContactName, " & _
" vI.VendorPhone " & _
" FROM CompletedOrdersCartHistory as coH " & _
" LEFT JOIN PRODUCTINFO_RECYCLEBIN AS pIH " & _
" ON coH.PartNumber = pIH.PartNumber " & _
" LEFT JOIN VendorInfo AS vI " & _
" ON pIH.VendorID = vI.VendorID " & _
" WHERE coh.OrderDate > @startDate " & _
" AND coh.OrderDate < @stopDate " & _
" AND coh.PartNumber = @PartNumber " & _
" GROUP BY " & _
" coH.PartNumber, " & _
" coH.ManPartNumber, " & _
" pIH.Cost, " & _
" pIH.Price, " & _
" pIH.ShortDescription, " & _
" pIH.LongDescription, " & _
" pIH.PostageImage, " & _
" pIH.Thumbnail, " & _
" pIH.VendorID, " & _
" pIH.VendorName, " & _
" pIH.Weight, " & _
" pIH.ShipHeight, " & _
" pIH.ShipWidth, " & _
" pIH.ShipDepth, " & _
" pIH.LimitedItem, " & _
" vi.Address1, " & _
" vI.Address2, " & _
" vI.City, " & _
" vI.StateCode, " & _
" vI.CountryCode, " & _
" vI.ZipCode, " & _
" vI.ContactName, " & _
" vI.VendorPhone "
query +=
"ORDER BY coH.PartNumber "
modified 6-May-15 15:32pm.
|
Since you're using a relatively recent version of MS SQL Server, something like this should work:
DECLARE @startDate AS DATE;
DECLARE @stopDate AS DATE;
SET @startDate = DATETIMEFROMPARTS(@Year, 1, 1, 0, 0, 0, 0);
SET @stopDate = DATETIMEFROMPARTS(@Year, @Month, @Day, 23, 59, 59, 999);
WITH cteProductInfo As
(
SELECT
PartNumber,
Cost,
Price,
ShortDescription,
LongDescription,
PostageImage,
Thumbnail,
VendorID,
VendorName,
Weight,
ShipHeight,
ShipWidth,
ShipDepth,
LimitedItem
FROM
PRODUCTINFO
UNION ALL
SELECT
PartNumber,
Cost,
Price,
ShortDescription,
LongDescription,
PostageImage,
Thumbnail,
VendorID,
VendorName,
Weight,
ShipHeight,
ShipWidth,
ShipDepth,
LimitedItem
FROM
PRODUCTINFO_RECYCLEBIN
),
cteOrders As
(
SELECT
PartNumber,
ManPartNumber,
SUM(Qty) as TotalQty,
CAST(SUM(Qty * Cost) AS decimal(10, 2)) as TotalCost,
CAST(SUM(Qty * Price) AS decimal(10, 2)) as TotalPrice
FROM
CompletedOrdersCartHistory
WHERE
OrderDate > @startDate
AND
OrderDate < @stopDate
AND
PartNumber = @PartNumber
GROUP BY
PartNumber,
ManPartNumber
)
SELECT
coH.PartNumber,
coH.ManPartNumber,
coH.TotalQty,
coH.TotalCost,
coH.TotalPrice,
pI.Cost,
pI.Price,
pI.ShortDescription,
pI.LongDescription,
pI.PostageImage,
pI.Thumbnail,
pI.VendorID,
pI.VendorName,
pI.Weight,
pI.ShipHeight,
pI.ShipWidth,
pI.ShipDepth,
pI.LimitedItem,
vI.Address1,
vI.Address2,
vI.City,
vI.StateCode,
vI.CountryCode,
vI.ZipCode,
vI.ContactName,
vI.VendorPhone
FROM
cteOrders as coH
LEFT JOIN cteProductInfo AS pI
ON coH.PartNumber = pI.PartNumber
LEFT JOIN VendorInfo AS vI
ON pI.VendorID = vI.VendorID
ORDER BY
coH.PartNumber
;
"These people looked deep within my soul and assigned me a number based on the order in which I joined."
- Homer
|
That looks interesting!
Ohhhhh, so that's how you union all the productinfo and productinfo_recyclebin together.
Yes I do understand the SQL now.
Wow, thanks for taking the time to look at it. I know it was a lot of code to look at, and it could get confusing.
I'll give it a spin tomorrow.
Thanks Richard!
|
Worked like a charm!
Thanks!
Had to adjust a couple of things, but it got me to a much better understanding of the SQL sample.
|
So I managed to run into the error:
"ERROR: database is not accepting commands to avoid wraparound data loss in database "mydb" HINT: Stop the postmaster and use a standalone backend to vacuum that database. You might also need to commit or roll back old prepared transactions."
I followed Google to the solution of restarting the database in standalone mode (like here[^]), but when I run:
VACUUM FULL mytable;
it gives me the message:
backend> VACUUM FULL mytable;
2015-05-06 09:45:09 UTC WARNING: database "mydb" must be vacuumed within 999976 transactions
2015-05-06 09:45:09 UTC HINT: To avoid a database shutdown, execute a database-wide VACUUM in that database.
You might also need to commit or roll back old prepared transactions.
2015-05-06 09:47:53 UTC WARNING: database "mydb" must be vacuumed within 999975 transactions
2015-05-06 09:47:53 UTC HINT: To avoid a database shutdown, execute a database-wide VACUUM in that database.
You might also need to commit or roll back old prepared transactions.
2015-05-06 09:47:53 UTC WARNING: database "mydb" must be vacuumed within 999974 transactions
2015-05-06 09:47:53 UTC HINT: To avoid a database shutdown, execute a database-wide VACUUM in that database.
You might also need to commit or roll back old prepared transactions.
backend>
I have no idea what I can do with this. Should I just continue the VACUUM for all the tables?
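From what I can tell, the HINT is asking for a database-wide VACUUM (no table name) rather than a per-table VACUUM FULL; a sketch of what I believe the standalone session should look like (the data directory path is illustrative):
-- with the postmaster stopped, start a standalone backend, e.g.:
--   postgres --single -D /path/to/data mydb
-- then run a plain VACUUM with no table name: it is database-wide and
-- advances the transaction-ID wraparound counter for the whole database
VACUUM;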
thanks.
|
It must be said... that really sucks.
|
That should be marked as abuse!
Never underestimate the power of human stupidity
RAH
|
That is why I used the joke icon.
|
That's all right. I'm a big boy now.
The joke aside, my VACUUM command failed and I'm really worried right now. This is the development database, but I'm afraid the production database will follow any minute.
I did a VACUUM on all tables in production except for one (the largest one), which fails to do anything.
Anyway, continuing the search with a positive attitude.
|
I meant Tim's response...
I can't help, as I have zero experience with Postgres. You might check whether SQLServerCentral helps out with Postgres, or whether there is a specialist site for that database. This sounds like a DBA-level problem, not a developer issue.
Never underestimate the power of human stupidity
RAH
|
Mycroft Holmes wrote: I meant Tim's response...
I know ...
I am the DBA and developer and analyst and tester and ...
|
V. wrote: the DBA and developer and analyst and tester and ...
No, you are the poor sod who has to look after the database: you may have designed it, built it, populated the data, and tuned the indexes, and then broke the bloody thing. You are not a DBA; you just have to do that job.
Sorry I can't help!
Never underestimate the power of human stupidity
RAH
|
I've never used Postgres, or I would help.
Can you do a backup and compress? Just grasping at straws here....
|
Something like that is in the plans, but it won't help me with the immediate problem.
But thanks for the effort.
|
Hello Experts,
I have been to most of the links Google provides for the question below, but all I get are differences.
For some reason, there is a table that has all of its columns as key columns, which is kind of weird, but I have to deal with it. Let me put it this way: my table contains data in 6 columns, and all of them are keys. If I want to edit something, I have to delete the entire row first and insert it again (or do something similar). So now, if we have a sequence object and I edit the row with an extra item, won't that be a new row? Or will it still be the existing row with the same sequence value? Please help me understand this.
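To make the question concrete, here is roughly what I mean (the table, columns, and sequence are all made up, using SQL Server's sequence syntax as an example):
-- today: every column is part of the key, so an "edit" is delete + insert
CREATE TABLE MyData
(
    ColA int NOT NULL,
    ColB int NOT NULL,
    ColC varchar(50) NOT NULL,
    PRIMARY KEY (ColA, ColB, ColC)
);
DELETE FROM MyData WHERE ColA = 1 AND ColB = 2 AND ColC = 'x';
INSERT INTO MyData (ColA, ColB, ColC) VALUES (1, 2, 'y');

-- with a sequence object feeding a surrogate key column instead:
CREATE SEQUENCE MyDataSeq AS int START WITH 1 INCREMENT BY 1;
CREATE TABLE MyData2
(
    Id int NOT NULL PRIMARY KEY,
    ColA int NOT NULL,
    ColB int NOT NULL,
    ColC varchar(50) NOT NULL
);
-- my question: when the "edit" is a delete + re-insert like this, does the
-- row get a new Id from the sequence, or can it keep the old one?
INSERT INTO MyData2 (Id, ColA, ColB, ColC)
VALUES (NEXT VALUE FOR MyDataSeq, 1, 2, 'y');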
Your help is much appreciated.
Thanks
Happy Coding!
|