Something like this should work:
WITH cteTally (N) As
(
    -- Generate the numbers 0 to 20, one per row:
    SELECT TOP 21
        ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) - 1
    FROM
        sys.objects
),
cteDays (TheDate) As
(
    -- Turn each number into a date, counting back from today:
    SELECT
        DATEADD(day, -N, GetUtcDate())
    FROM
        cteTally
)
SELECT
    TheDate
FROM
    cteDays As D
WHERE
    -- Exclude weekends; adding @@DATEFIRST makes the test independent
    -- of the server's DATEFIRST setting (Saturday always maps to 0
    -- and Sunday to 1):
    ((DATEPART(dw, TheDate) + @@DATEFIRST) % 7) Not In (0, 1)
    And
    -- Keep only dates with no matching row in table1:
    Not Exists
    (
        SELECT 1
        FROM table1 As T
        WHERE T.inserteddate = D.TheDate
    )
ORDER BY
    TheDate DESC
;
"These people looked deep within my soul and assigned me a number based on the order in which I joined."
- Homer
Is there any benefit to the CTE over a temp table? I consider the temp table to be much more readable, but if there's a significant performance benefit I will need to reconsider!
Never underestimate the power of human stupidity
RAH
For 21 rows, there probably won't be much difference in performance between a CTE and a temp table.
I prefer the CTE solution because it's a single statement. With a temp table, you need to check if the table exists, create it, populate it, run your query, and drop it again.
Of course, if you've already got a tally table in your DB, then you could use that instead. It might even be worth creating a table of dates, so that you can mark holidays and other closures as well as weekends.
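For example, here's a minimal sketch of such a dates table — the names (dbo.Calendar, IsWeekend, IsHoliday) and the start date are just placeholders:
CREATE TABLE dbo.Calendar
(
    TheDate date NOT NULL PRIMARY KEY,
    IsWeekend bit NOT NULL,
    IsHoliday bit NOT NULL DEFAULT 0,  -- set by hand for public holidays and other closures
    HolidayName nvarchar(100) NULL
);

WITH cteTally (N) As
(
    -- Roughly ten years of day numbers:
    SELECT TOP (3653)
        ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) - 1
    FROM
        sys.all_objects As a
        CROSS JOIN sys.all_objects As b
)
INSERT INTO dbo.Calendar (TheDate, IsWeekend)
SELECT
    D.TheDate,
    -- Same weekend test as above (Saturday -> 0, Sunday -> 1):
    CASE WHEN ((DATEPART(dw, D.TheDate) + @@DATEFIRST) % 7) In (0, 1) THEN 1 ELSE 0 END
FROM
    cteTally
    CROSS APPLY (SELECT DATEADD(day, N, CONVERT(date, '20150101')) As TheDate) As D;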
"These people looked deep within my soul and assigned me a number based on the order in which I joined."
- Homer
For simple dates I have a view that runs from the start of the previous year to ten years out, but for some apps I have a Holiday table with all the public holidays of the various countries we deal with.
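It's roughly this shape — a sketch, with the view name and exact range made up:
CREATE VIEW dbo.vwDates
As
WITH cteTally (N) As
(
    -- Start of previous year through ten years out, about twelve years of days:
    SELECT TOP (4383)
        ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) - 1
    FROM
        sys.all_objects As a
        CROSS JOIN sys.all_objects As b
)
SELECT
    DATEADD(day, N, DATEFROMPARTS(YEAR(GETDATE()) - 1, 1, 1)) As TheDate
FROM
    cteTally;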
Never underestimate the power of human stupidity
RAH
How do I display employee name, department name and salary where the department number is 10?
Display it where? On the screen, a form, the console, printer ... ?
SELECT employeename, departmentname, salary FROM employees WHERE departmentno = 10;
The difficult we do right away...
...the impossible takes slightly longer.
Can somebody tell me why the following isn't working?
UPDATE JobTicket_024 SET JobTicket_024.file = Replace([file_link],"\\52qjkl-as-004p","F:\CE\CE\CEDrawings");
I stole this from a previous UPDATE I ran against the same table; now I'm just changing the path again.
It updates the "file" field with "\\qjkl-as-004p", not the new path. What gives?
Thanks in advance.
Windows 7
MS Office 10 (32 bit)
Access
64bit machine
modified 7-May-15 10:06am.
You are asking SQL to look at the file_link column, find "\\52qjkl-as-004p", replace it with "F:\CE\CE\CEDrawings", and put the result into the file column.
Perhaps a before-and-after picture would help.
However, instead of UPDATE, just change it to SELECT so you can see what it will do.
There are only 10 types of people in the world, those who understand binary and those who don't.
SELECT JobTicket_024.file_link, JobTicket_024.Project_Number
FROM JobTicket_024
WHERE (((JobTicket_024.file_link)=Replace([JobTicket_024]![file_link],"'\\52qjkl-as-004v\CE_Data\CE\CEC\oce_drawings'","'F:\CE\CE\CE Drawings'")));
OK, first I changed UPDATE to SELECT per your suggestion, and I changed it to look at the "file_link" column and replace within "file_link".
Interestingly, the query shows a table, and then an error message pops up with "Data type mismatch in criteria expression"; as soon as I click "Help" it clears the table. And the field isn't updated either.
Don't put it in the where clause. Use the SELECT statement as a preview of what you want the update to do.
For example:
SELECT file, file_link, Replace([file_link],'\52qjkl-as-04v\CE_Data\CE\CEC\oce_drawings','F:\CE\CE\CE Drawings') AS NewFile
FROM JobTicket_024
There are only 10 types of people in the world, those who understand binary and those who don't.
Thanks RyanDev, but still a no go. I copy/pasted your code to make sure I did not have any typos.
It creates the new column, NewFile, but there were no changes to the resulting data. It just copied the old column to the new column verbatim.
I know there were no changes. It is a SELECT statement, not an update statement. Perhaps run it again and paste some of the results here so we can see.
There are only 10 types of people in the world, those who understand binary and those who don't.
F#*#k
There was a typo in my select statement. That's why it wasn't finding anything in the search/select column.
Sorry for the troubles.
Thanks for helping.
No problem. Glad you found it.
There are only 10 types of people in the world, those who understand binary and those who don't.
I am using Qlik to write back to the database (Sybase IQ); however, the insert query fails after inserting 3500 rows with an "access violation" error, even though those 3500 rows do get inserted into the target table.
I have verified that the data beyond 3500 rows also gets inserted if I run it for the 3500th to 7000th rows, so it cannot be a data issue.
I have looked around for this error and tried setting the BufferCacheSize, PrefetchRows, EnableBulkInsert etc. properties, but to no avail. Also, if I execute the query with 1000 rows in a single insert, it fails with the same error. If, however, I insert the same data 100 or 500 rows at a time, the load succeeds.
Is there a limitation on how much data the Sybase IQ ODBC driver can handle on an open connection? Any ideas/solutions would be much appreciated.
Thanks.
modified 7-May-15 5:30am.
Ankit RS wrote: Any ideas/solutions would be much appreciated. It sure does sound like you are hitting some sort of memory threshold. I'd suggest batching your records in smaller amounts.
There are only 10 types of people in the world, those who understand binary and those who don't.
I second RyanDev's suggestion. I've seen a benchmark somewhere comparing insert speed against batch size, and it showed diminishing returns starting around 50. I think it was for SQL Server, but I'd imagine it's similar for other DBMSs, which would mean you won't gain much with a batch size of 1000 anyway.
If the brain were so simple we could understand it, we would be so simple we couldn't. — Lyall Watson
Thanks Ryan and Sascha.
I think I figured out the issue: it most certainly looks to be a threshold issue. On executing an insert inside a loop from 1 to 2000 in the query editor, Sybase throws up a pop-up stating (paraphrased) that the maximum number of entries for the editor is 1000 and that you should remove successful logs from the log file. There are options to either remove or keep the successful entries in the log. See the attached snapshot.
Surely this is what's causing the insert to fail when executed through Qlik, as there's no handler for the pop-up.
(Funny, I can't locate the attach option here for the snapshot.)
Edit: Turns out the solution is to: 1) keep the number of rows in each insert below 1000 per execution (loop or otherwise), and 2) include the begin and commit transaction statements as part of the query itself instead of using connection.BeginTrans and connection.CommitTrans in the VBScript code (which is what I was doing initially).
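In other words, each batch the script sends now looks roughly like this (table and column names are placeholders, and the exact transaction syntax may vary by Sybase dialect):
BEGIN TRANSACTION;
INSERT INTO TargetTable (Id, Val) VALUES (1, 'a');
INSERT INTO TargetTable (Id, Val) VALUES (2, 'b');
-- ...at most 999 rows per batch...
COMMIT;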
modified 8-May-15 7:07am.
Glad you solved it! Cheers
If the brain were so simple we could understand it, we would be so simple we couldn't. — Lyall Watson
I got myself into a pickle here.
I started letting customers delete items in the store, but I copied each deleted item into a separate table.
I should have added a status column instead.
Anyway, on my report for total items sold in the year, I did a join on the product table to get the current cost, images, etc.
So now that some items have been deleted, the record is no longer in the product table but in the recycle table.
I tried a UNION ALL, but it results in 2 records, of which the 2nd is null.
I tried plain UNION as well, but it produces a reader error.
I'm not sure how to proceed on this, or what words to use to search here.
I'm stumped!
[EDIT]
Maybe I should have done an IF EXISTS around the join and just switched tables.
Dim query As String = _
"DECLARE @startDate AS DATE; " & _
"DECLARE @stopDate AS DATE; " & _
"SET @startDate = DATETIMEFROMPARTS(@Year, 1, 1, 0, 0, 0, 0); " & _
"SET @stopDate = DATETIMEFROMPARTS(@Year, @Month, @Day, 23, 59, 59, 999); "
query +=
"SELECT " & _
" coH.PartNumber, " & _
" coH.ManPartNumber, " & _
" SUM(coH.Qty) as TotalQty, " & _
" CAST(SUM(coH.Qty * coh.Cost) AS Decimal(10,2)) as TotalCost, " & _
" CAST(SUM(coH.Qty * coh.Price) AS Decimal(10,2)) as TotalPrice, " & _
" pI.Cost, " & _
" pI.Price, " & _
" pI.ShortDescription, " & _
" pI.LongDescription, " & _
" pI.PostageImage, " & _
" pI.Thumbnail, " & _
" pI.VendorID, " & _
" pI.VendorName, " & _
" pI.Weight, " & _
" pI.ShipHeight, " & _
" pI.ShipWidth, " & _
" pI.ShipDepth, " & _
" pI.LimitedItem, " & _
" vI.Address1, " & _
" vI.Address2, " & _
" vI.City, " & _
" vI.StateCode, " & _
" vI.CountryCode, " & _
" vI.ZipCode, " & _
" vI.ContactName, " & _
" vI.VendorPhone " & _
" FROM CompletedOrdersCartHistory as coH " & _
" LEFT JOIN PRODUCTINFO AS pI " & _
" ON coH.PartNumber = pI.PartNumber " & _
" LEFT JOIN VendorInfo AS vI " & _
" ON pI.VendorID = vI.VendorID " & _
" WHERE coh.OrderDate > @startDate " & _
" AND coh.OrderDate < @stopDate " & _
" AND coh.PartNumber = @PartNumber " & _
" GROUP BY " & _
" coH.PartNumber, " & _
" coH.ManPartNumber, " & _
" pI.Cost, " & _
" pI.Price, " & _
" pI.ShortDescription, " & _
" pI.LongDescription, " & _
" pI.PostageImage, " & _
" pI.Thumbnail, " & _
" pI.VendorID, " & _
" pI.VendorName, " & _
" pI.Weight, " & _
" pI.ShipHeight, " & _
" pI.ShipWidth, " & _
" pI.ShipDepth, " & _
" pI.LimitedItem, " & _
" vi.Address1, " & _
" vI.Address2, " & _
" vI.City, " & _
" vI.StateCode, " & _
" vI.CountryCode, " & _
" vI.ZipCode, " & _
" vI.ContactName, " & _
" vI.VendorPhone "
query +=
" UNION ALL "
query +=
"SELECT " & _
" coH.PartNumber, " & _
" coH.ManPartNumber, " & _
" SUM(coH.Qty) as TotalQty, " & _
" CAST(SUM(coH.Qty * coh.Cost) AS Decimal(10,2)) as TotalCost, " & _
" CAST(SUM(coH.Qty * coh.Price) AS Decimal(10,2)) as TotalPrice, " & _
" pIH.Cost, " & _
" pIH.Price, " & _
" pIH.ShortDescription, " & _
" pIH.LongDescription, " & _
" pIH.PostageImage, " & _
" pIH.Thumbnail, " & _
" pIH.VendorID, " & _
" pIH.VendorName, " & _
" pIH.Weight, " & _
" pIH.ShipHeight, " & _
" pIH.ShipWidth, " & _
" pIH.ShipDepth, " & _
" pIH.LimitedItem, " & _
" vI.Address1, " & _
" vI.Address2, " & _
" vI.City, " & _
" vI.StateCode, " & _
" vI.CountryCode, " & _
" vI.ZipCode, " & _
" vI.ContactName, " & _
" vI.VendorPhone " & _
" FROM CompletedOrdersCartHistory as coH " & _
" LEFT JOIN PRODUCTINFO_RECYCLEBIN AS pIH " & _
" ON coH.PartNumber = pIH.PartNumber " & _
" LEFT JOIN VendorInfo AS vI " & _
" ON pIH.VendorID = vI.VendorID " & _
" WHERE coh.OrderDate > @startDate " & _
" AND coh.OrderDate < @stopDate " & _
" AND coh.PartNumber = @PartNumber " & _
" GROUP BY " & _
" coH.PartNumber, " & _
" coH.ManPartNumber, " & _
" pIH.Cost, " & _
" pIH.Price, " & _
" pIH.ShortDescription, " & _
" pIH.LongDescription, " & _
" pIH.PostageImage, " & _
" pIH.Thumbnail, " & _
" pIH.VendorID, " & _
" pIH.VendorName, " & _
" pIH.Weight, " & _
" pIH.ShipHeight, " & _
" pIH.ShipWidth, " & _
" pIH.ShipDepth, " & _
" pIH.LimitedItem, " & _
" vi.Address1, " & _
" vI.Address2, " & _
" vI.City, " & _
" vI.StateCode, " & _
" vI.CountryCode, " & _
" vI.ZipCode, " & _
" vI.ContactName, " & _
" vI.VendorPhone "
query +=
"ORDER BY coH.PartNumber "
modified 6-May-15 15:32pm.
Since you're using a relatively recent version of MS SQL Server, something like this should work:
DECLARE @startDate AS datetime;  -- datetime, not date, so the time parts below aren't truncated
DECLARE @stopDate AS datetime;
SET @startDate = DATETIMEFROMPARTS(@Year, 1, 1, 0, 0, 0, 0);
SET @stopDate = DATETIMEFROMPARTS(@Year, @Month, @Day, 23, 59, 59, 999);
WITH cteProductInfo As
(
SELECT
PartNumber,
Cost,
Price,
ShortDescription,
LongDescription,
PostageImage,
Thumbnail,
VendorID,
VendorName,
Weight,
ShipHeight,
ShipWidth,
ShipDepth,
LimitedItem
FROM
PRODUCTINFO
UNION ALL
SELECT
PartNumber,
Cost,
Price,
ShortDescription,
LongDescription,
PostageImage,
Thumbnail,
VendorID,
VendorName,
Weight,
ShipHeight,
ShipWidth,
ShipDepth,
LimitedItem
FROM
PRODUCTINFO_RECYCLEBIN
),
cteOrders As
(
SELECT
PartNumber,
ManPartNumber,
SUM(coH.Qty) as TotalQty,
CAST(SUM(coH.Qty * coH.Cost) AS decimal(10, 2)) as TotalCost,
CAST(SUM(coH.Qty * coH.Price) AS decimal(10, 2)) as TotalPrice
FROM
CompletedOrdersCartHistory As coH
WHERE
-- Filter here, since OrderDate isn't exposed by the CTE:
coH.OrderDate > @startDate
And
coH.OrderDate < @stopDate
And
coH.PartNumber = @PartNumber
GROUP BY
PartNumber,
ManPartNumber
)
SELECT
coH.PartNumber,
coH.ManPartNumber,
coH.TotalQty,
coH.TotalCost,
coH.TotalPrice,
pI.Cost,
pI.Price,
pI.ShortDescription,
pI.LongDescription,
pI.PostageImage,
pI.Thumbnail,
pI.VendorID,
pI.VendorName,
pI.Weight,
pI.ShipHeight,
pI.ShipWidth,
pI.ShipDepth,
pI.LimitedItem,
vI.Address1,
vI.Address2,
vI.City,
vI.StateCode,
vI.CountryCode,
vI.ZipCode,
vI.ContactName,
vI.VendorPhone
FROM
cteOrders as coH
LEFT JOIN cteProductInfo AS pI
ON coH.PartNumber = pI.PartNumber
LEFT JOIN VendorInfo AS vI
ON pI.VendorID = vI.VendorID
WHERE
coh.OrderDate > @startDate
AND
coh.OrderDate < @stopDate
AND
coh.PartNumber = @PartNumber
ORDER BY
coH.PartNumber
;
"These people looked deep within my soul and assigned me a number based on the order in which I joined."
- Homer
That looks interesting!
Ohhhhh, so that's how you union all the productinfo and productinfo_recyclebin together.
Yes I do understand the SQL now.
Wow, thanks for taking the time to look at it. I know it was a lot of code to wade through, and it could get confusing.
I'll give it a spin tomorrow.
Thanks Richard!
Worked like a charm!
Thanks!
Had to adjust a couple of things, but it got me to a much better understanding of the SQL sample.
So I managed to run into the error:
"ERROR: database is not accepting commands to avoid wraparound data loss in database "mydb" HINT: Stop the postmaster and use a standalone backend to vacuum that database. You might also need to commit or roll back old prepared transactions."
I followed Google to the solution of restarting the database in standalone mode (like here[^]), but when I run:
VACUUM FULL mytable;
it gives me the message:
backend> VACUUM FULL mytable;
2015-05-06 09:45:09 UTC WARNING: database "mydb" must be vacuumed within 999976 transactions
2015-05-06 09:45:09 UTC HINT: To avoid a database shutdown, execute a database-wide VACUUM in that database.
You might also need to commit or roll back old prepared transactions.
2015-05-06 09:47:53 UTC WARNING: database "mydb" must be vacuumed within 999975 transactions
2015-05-06 09:47:53 UTC HINT: To avoid a database shutdown, execute a database-wide VACUUM in that database.
You might also need to commit or roll back old prepared transactions.
2015-05-06 09:47:53 UTC WARNING: database "mydb" must be vacuumed within 999974 transactions
2015-05-06 09:47:53 UTC HINT: To avoid a database shutdown, execute a database-wide VACUUM in that database.
You might also need to commit or roll back old prepared transactions.
backend>
I have no idea what to do with this. Should I just continue the VACUUM table by table?
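From the wording of the hint, my guess is that it wants a single plain database-wide VACUUM (no table name, and no FULL) rather than going table by table, i.e. just:
VACUUM;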
thanks.