|
Sorry, my data was bad. Hopefully this makes more sense:
1. The time a job takes for a person to close is the difference between when a job is assigned to a user and when that user closes the job.
2. A job can be re-opened, which I guess means I need to use MIN and MAX for a given job.
Here's better data:
JobID User TaskID Task AuditDatetime
17 Bob 2 Opens job 1/8/2014 11:14:29
17 Bob 5 Closes job 2/8/2014 10:12:26
22 Joe 2 Opens job 2/8/2014 11:01:45
22 Bob 5 Closes job 3/8/2014 11:44:22
25 Bob 1 Opens job 4/8/2014 11:40:22
25 Joe 3 Assigned job 5/8/2014 11:47:22
25 Joe 5 Closes job 5/8/2014 11:49:22
The JobID is a request made to the service desk. The TaskID can be opens, assigns, or closes.
I suppose I'll only be able to report on closed jobs rather than open ones. This is actually pretty complicated, as there are also on-hold items, but I'll forget about those for now.
|
|
|
|
|
No problem, if that's the case then perhaps something like
SELECT iv.JobId,
iv.OpenedBy,
iv.ClosedBy,
iv.OpenTime,
iv.CloseTime,
DATEDIFF(hh, iv.OpenTime, iv.CloseTime) AS DurationInHours
FROM (
SELECT DISTINCT
tt_open.JobId AS JobId,
tt_open.[User] AS OpenedBy,
tt_close.[User] AS ClosedBy,
MIN(tt_open.AuditDatetime) OVER (PARTITION BY tt_open.JobId) AS OpenTime,
MAX(tt_close.AuditDatetime) OVER (PARTITION BY tt_close.JobId) AS CloseTime
FROM TheTable tt_open LEFT OUTER JOIN TheTable tt_close
ON tt_open.JobId = tt_close.JobId
WHERE tt_open.TaskId IN (1,2)
AND tt_close.TaskId = 5) iv
Again, sorry about potential typos 
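If reopen/reclose cycles mean several different users can appear on the same job, a conditional-aggregation version avoids the self-join entirely. A sketch, assuming the table is called TheTable and the TaskId meanings from the post (1/2 = open, 5 = close):

```sql
-- First open and last close per job, via conditional aggregation.
-- Only closed jobs are reported, matching the original requirement.
SELECT t.JobId,
       MIN(CASE WHEN t.TaskId IN (1, 2) THEN t.AuditDatetime END) AS OpenTime,
       MAX(CASE WHEN t.TaskId = 5       THEN t.AuditDatetime END) AS CloseTime,
       DATEDIFF(hh,
                MIN(CASE WHEN t.TaskId IN (1, 2) THEN t.AuditDatetime END),
                MAX(CASE WHEN t.TaskId = 5       THEN t.AuditDatetime END)) AS DurationInHours
FROM TheTable t
GROUP BY t.JobId
HAVING MAX(CASE WHEN t.TaskId = 5 THEN t.AuditDatetime END) IS NOT NULL; -- closed jobs only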
|
|
|
|
|
That's perfect and brilliant, thank you very much!
|
|
|
|
|
Glad it helped 
|
|
|
|
|
Hello All,
I'm very much interested in learning SSIS and SSAS.
It would be a great help to me if someone can please guide me about books to follow/read to be an expert in SSIS and SSAS.
Thanks & Regards,
Arun
|
|
|
|
|
|
|
I have this report for a customer that gives away product samples and uses 0.00 as the price.
So I'm supposed to calculate the cost and the price, but they're always different during the year.
Anyway, when I calculate this one item, of which they sold 100, but 90 of them were free, the numbers skew and don't make sense. So I figured perhaps I'll create 2 result sets, 1 for no free items and 1 for all free items.
What would you do? 2 select statements, and the reader will return 2 result sets? I'm not sure how to proceed here on this one.
" SELECT " & _
" v.FITEMNO " & _
", SUM(v.FSHIPQTY) " & _
", SUM(v.FCOST) / SUM(v.FSHIPQTY) " & _
", SUM(v.FPRICE) / SUM(v.FSHIPQTY) " & _
", (SELECT FDESCRIPT FROM ICITM01.dbf WHERE FITEMNO = v.FITEMNO) AS FREALDESC " & _
", SUM(v.FSHIPQTY * v.FPRICE - v.FSHIPQTY * v.FCOST)" & _
" FROM ARTRS01H.dbf v " & _
" WHERE " & _
" v.FSHIPDATE >= @startDate AND v.FSHIPDATE <= @stopDate " & _
" AND " & _
" FITEMNO = @FITEMNO " & _
" AND NOT " & _
" FPRICE = 0.00 "
|
|
|
|
|
I think you are on the right track.
Create 2 queries, but use a UNION statement, which will combine the result sets into one.
Good luck. 
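For example, a sketch based on the query in the post (column and table names are taken from it; whether the provider supports GROUP BY and UNION ALL over the .dbf files is an assumption). A literal column tags each result set, and UNION ALL is used so neither set's rows are deduplicated away:

```sql
-- Paid (non-free) items in one set, free samples in the other.
SELECT 'Paid' AS SampleType,
       v.FITEMNO,
       SUM(v.FSHIPQTY) AS Qty,
       SUM(v.FCOST)  / SUM(v.FSHIPQTY) AS AvgCost,
       SUM(v.FPRICE) / SUM(v.FSHIPQTY) AS AvgPrice
FROM ARTRS01H.dbf v
WHERE v.FSHIPDATE >= @startDate AND v.FSHIPDATE <= @stopDate
  AND v.FITEMNO = @FITEMNO
  AND NOT v.FPRICE = 0.00
GROUP BY v.FITEMNO
UNION ALL
SELECT 'Free',
       v.FITEMNO,
       SUM(v.FSHIPQTY),
       SUM(v.FCOST) / SUM(v.FSHIPQTY),
       0                                -- free samples: price is always 0.00
FROM ARTRS01H.dbf v
WHERE v.FSHIPDATE >= @startDate AND v.FSHIPDATE <= @stopDate
  AND v.FITEMNO = @FITEMNO
  AND v.FPRICE = 0.00
GROUP BY v.FITEMNO
```

Keeping the free rows out of the averages stops the 90 giveaways from dragging the average price toward zero, while still reporting their quantity and cost.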
|
|
|
|
|
Please tell me about a good and interesting book to clarify the concepts of SQL, dynamic procedures, and user-defined functions, with good examples.
|
|
|
|
|
I do not read technical books anymore because information is much easier to find online now. I know some people still read books, and they have their place, but I would suggest you Google for some examples.
Often when I Google how to do something in SQL, this blog is in the result set: http://blog.sqlauthority.com/[^] It is very well written. I highly recommend going through it.
There are only 10 types of people in the world, those who understand binary and those who don't.
|
|
|
|
|
I have a table that looks like the one below.
EMPID QID QDesc Reason Sub Reason Remarks
201 1 Question 1 Reason1 Break Taking break
201 2 Question 2 Reason2 Poor
201 2 Question 2 Reason3 Good
201 2 Question 2 Reason4 Average
201 2 Question 2 Reason5 Good
201 2 Question 2 Reason6 Average
201 2 Question 2 Reason7 Average
201 2 Question 2 Reason8 Average
201 3 Question 3 NA
201 4 Question 4 Reason9
201 5 Question 5 Reason10 Yes
The required output from the above table should be in the below format. Please help.
Emp ID Question 1 Question 2 Question 3 Question 4 Question 5 Remarks
Reason Sub reason Remarks Reason2 Reason3 to Reason8
modified 15-Oct-14 2:25am.
|
|
|
|
|
This article[^] might help you!
You are looking for a PIVOT table.
Never underestimate the power of human stupidity
RAH
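A minimal PIVOT sketch along those lines (the table name Answers is assumed; the other names come from the post). The required layout in the question is ambiguous, so this just turns the QDesc values into columns. Note that PIVOT needs an aggregate, so where a question has several reasons (Question 2's Reason2 through Reason8), MIN picks one; a fuller answer would number the reasons first:

```sql
-- One row per employee, one column per question, first reason per question.
SELECT p.EMPID,
       p.[Question 1], p.[Question 2], p.[Question 3],
       p.[Question 4], p.[Question 5]
FROM (SELECT EMPID, QDesc, Reason
      FROM Answers) src
PIVOT (MIN(Reason)
       FOR QDesc IN ([Question 1], [Question 2], [Question 3],
                     [Question 4], [Question 5])) p;
```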
|
|
|
|
|
So I have 3 Tables
OrderInfo - Unique records
ShipRates - Unique records
OrderInfo_Emails - A list of non-unique records
There are 3 records in OrderInfo, but I get 4 results back, because OrderInfo_Emails has 2 records with the same OrderInfo ID. So I get why I get 4 back and not 3.
Should I just drop the OrderInfo_Emails join and do something else some other way, or is there a way to limit the results from the join?
SELECT
oi.OrderInfoID
, oi.OrderDate
, oi.OrderNum
, oi.LoginID
, oi.OrderStatus
, oi.AuthorizationID
, oi.ApprovalCode
, oi.Shipping
, oi.PaymentInfo
, oi.SubTotal
, oi.GrandTotal
, oi.RateAPICode
, oi.TotalNetCharge
, srr.RateName
, oie.MessageID
, oie.DateSent
, oie.SentBy
FROM OrderInfo oi
LEFT JOIN ORDERINFO_EMAIL oie
ON (oie.OrderID = oi.OrderInfoID )
LEFT JOIN Shipping_RealTime_Rates srr
ON (srr.RateAPICode = oi.RateAPICode)
ORDER BY oi.OrderDate DESC
|
|
|
|
|
How do you pick the one you want? What's the criteria?
I'm not questioning your powers of observation; I'm merely remarking upon the paradox of asking a masked man who he is. (V)
|
|
|
|
|
Oh, the first one is fine.
It's for Abandoned Orders, in which an email is sent to the customer as a reminder. You can send multiple emails to the customer, and the emails are logged in ORDERINFO_EMAIL. So when you load the Abandoned Orders, if an email record exists, a little envelope icon appears.
|
|
|
|
|
You may try this:
SELECT
oi.OrderInfoID
, oi.OrderDate
, oi.OrderNum
, oi.LoginID
, oi.OrderStatus
, oi.AuthorizationID
, oi.ApprovalCode
, oi.Shipping
, oi.PaymentInfo
, oi.SubTotal
, oi.GrandTotal
, oi.RateAPICode
, oi.TotalNetCharge
, srr.RateName
, (select count(*) from ORDERINFO_EMAIL oie where oie.OrderID = oi.OrderInfoID) AS email_sent
FROM OrderInfo oi
LEFT JOIN Shipping_RealTime_Rates srr
ON (srr.RateAPICode = oi.RateAPICode)
ORDER BY oi.OrderDate DESC
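If you ever need the email's details (MessageID, DateSent, SentBy) rather than just a count, OUTER APPLY with TOP 1 pulls exactly one email row per order without multiplying the results. A sketch using the same table and column names as above, picking the earliest email (swap the ORDER BY for the latest):

```sql
SELECT oi.OrderInfoID,
       oi.OrderDate,
       oi.OrderNum,
       srr.RateName,
       e.MessageID,
       e.DateSent,
       e.SentBy
FROM OrderInfo oi
LEFT JOIN Shipping_RealTime_Rates srr
       ON srr.RateAPICode = oi.RateAPICode
OUTER APPLY (SELECT TOP 1 oie.MessageID, oie.DateSent, oie.SentBy
             FROM ORDERINFO_EMAIL oie
             WHERE oie.OrderID = oi.OrderInfoID
             ORDER BY oie.DateSent) e   -- earliest email, NULLs if none
ORDER BY oi.OrderDate DESC;
```

Orders with no emails still appear (the applied columns are NULL), so the envelope-icon check still works.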
|
|
|
|
|
I was thinking the same thing, to keep it simple.
OK, I'm going with that idea.
Thanks!
|
|
|
|
|
When and why does this kind of exception occur?
If it occurs, why doesn't SQL Server show the column name to which the exception belongs?
|
|
|
|
|
Vijay G. Yadav wrote: why the SQL Server dont show the column name to which this exception belongs
Because SQL Server is a PITA with this error. I have yet to find a simple way to debug it; rabbiting through all the text fields and checking the lengths is just irritating.
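One semi-automated way to narrow it down is to compare each destination column's declared length against the longest value the source actually contains. A sketch, where DestTable, SourceTable, and SomeColumn are placeholders for the tables and columns in your own failing INSERT:

```sql
-- Step 1: list the destination's character columns and declared lengths.
SELECT c.COLUMN_NAME,
       c.CHARACTER_MAXIMUM_LENGTH AS DeclaredLength
FROM INFORMATION_SCHEMA.COLUMNS c
WHERE c.TABLE_NAME = 'DestTable'
  AND c.CHARACTER_MAXIMUM_LENGTH IS NOT NULL  -- char/varchar/nchar/nvarchar
ORDER BY c.CHARACTER_MAXIMUM_LENGTH;

-- Step 2: for each candidate column, find the longest value in the source.
-- Any column whose MAX(LEN(...)) exceeds DeclaredLength is the culprit.
SELECT MAX(LEN(s.SomeColumn)) AS LongestValue
FROM dbo.SourceTable s;
```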
|
|
|
|
|
|
Vijay G. Yadav wrote: why the SQL Server dont show the column name to which this exception belongs ?
Good question. There's a suggestion to fix this on Connect[^], which has been open since 2008. So far, Microsoft have not responded.
"These people looked deep within my soul and assigned me a number based on the order in which I joined."
- Homer
|
|
|
|
|
|
|
A log file has been growing very fast from 2 gig up to 19 gig in a matter of about 4 hours.
The recovery model is set as full.
Even with backing up the transaction log file, every 30 minutes, I have found that the log file is not getting truncated.
I did notice that the file growth stopped as soon as the workday came to a close.
On running DBCC SQLPERF(LOGSPACE) I noticed that the log space was being eaten up fairly quickly. Also, on running DBCC OPENTRAN, I did not see any really old transactions that had not been committed.
In my investigations someone mentioned that if a person was selecting a large volume of data, this could make the log file grow. It is quite possible to select in the region of 9 million plus rows in a select query of this database.
I was under the impression that the transaction log file only contained updates, deletes and inserts, in order to be able to rebuild the database at any point in time. Having googled this, I am none the wiser.
I will run some more diagnostic tests on Monday to see if I can discover any more information if I see the log file growing rapidly again.
So my question is – would a large number of individual select commands, or a high volume of data being selected get logged to the transaction log file?
_____________________________________________
Solved - read below
_____________________________________________
It turns out that a user was running a large report which kept falling over.
They kept restarting the report.
The software is a 3rd party piece of software that appears to write results to some sort of temporary table.
I asked the user to run the report first thing this morning, when the office was very quiet.
Using DBCC SQLPERF(LOGSPACE) I then noticed the log starting to be eaten up again and when the report completed the log stopped growing.
So I think I am going to set up some sort of warning system that monitors log growth over time and alerts me when it starts ballooning as it did on Friday.
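A simple version of that warning system can be built by capturing DBCC SQLPERF(LOGSPACE) into a table on a schedule (e.g. a SQL Server Agent job every few minutes). A sketch; the table name and alert threshold are illustrative:

```sql
-- History table for log-space snapshots (columns match DBCC SQLPERF(LOGSPACE) output).
CREATE TABLE dbo.LogSpaceHistory (
    CapturedAt      DATETIME NOT NULL DEFAULT GETDATE(),
    DatabaseName    SYSNAME  NOT NULL,
    LogSizeMB       FLOAT    NOT NULL,
    LogSpaceUsedPct FLOAT    NOT NULL,
    Status          INT      NOT NULL
);

-- Scheduled step: capture a snapshot.
INSERT INTO dbo.LogSpaceHistory (DatabaseName, LogSizeMB, LogSpaceUsedPct, Status)
EXEC ('DBCC SQLPERF(LOGSPACE)');

-- Scheduled step: flag databases whose log grew sharply in the last hour
-- (10% growth is an illustrative threshold; raise an Agent alert/email here).
SELECT h.DatabaseName,
       MAX(h.LogSizeMB) - MIN(h.LogSizeMB) AS GrowthMB
FROM dbo.LogSpaceHistory h
WHERE h.CapturedAt >= DATEADD(hour, -1, GETDATE())
GROUP BY h.DatabaseName
HAVING MAX(h.LogSizeMB) > MIN(h.LogSizeMB) * 1.10;
```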
“That which can be asserted without evidence, can be dismissed without evidence.”
― Christopher Hitchens
modified 6-Oct-14 8:04am.
|
|
|
|
|