
Comments by Jörgen Andersson (Top 200 by date)

Jörgen Andersson 14-Apr-21 8:48am View
Your trigger updates the same table that triggers it, that's what's causing the loop.
You need to rethink your approach.
Personally I avoid triggers altogether for everything but logging.
Jörgen Andersson 8-Mar-21 10:23am View
I would suspect so, since he refers to "I have also tried the following EF statement"
Jörgen Andersson 10-Feb-21 3:55am View
When you set a variable you need to use :=
Earlier versions of Oracle's PL/SQL language were based on the Ada language syntax and structure.
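A minimal PL/SQL sketch of the distinction (variable name made up): `:=` assigns, a bare `=` only compares.

```sql
DECLARE
  v_total NUMBER := 0;           -- := initializes the variable
BEGIN
  v_total := v_total + 1;        -- := assigns a new value
  IF v_total = 1 THEN            -- a bare = is only for comparison
    DBMS_OUTPUT.PUT_LINE('v_total is 1');
  END IF;
END;
```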
Jörgen Andersson 5-Jan-21 6:30am View
Jörgen Andersson 14-Dec-20 4:20am View
Well, actually you can. But since that's the wrong answer for this purpose, I won't submit a link on how to do it. :)
Jörgen Andersson 25-Nov-20 5:49am View
Cascading deletes really are a big NO in my book, and subject to public shaming, since public flogging seems to be out of fashion these days.
Jörgen Andersson 24-Nov-20 6:15am View
I would ask the customer if they had a reason to leave them empty, or if they just didn't care, in which case you can set them to whatever.
Jörgen Andersson 24-Nov-20 5:35am View
A SOAPAction is a URI identifying the "intent". But there are no restrictions on the format of the URI or that it's even real. So it's basically an identifier that has to be unique for every endpoint.
There are circumstances where you don't need it though, but we have too little info to answer whether that's the case.
More info here:
Jörgen Andersson 3-Nov-20 2:56am View
What database and version is it?
Jörgen Andersson 29-Sep-20 4:41am View
I'm seeing quite a few code smells here: "ISOLATION LEVEL READ UNCOMMITTED", "WITH (NOLOCK)" and DISTINCT.

What is the reason for the loop?
Jörgen Andersson 21-Sep-20 2:29am View
The 'official' way of doing it is using unpivot, but if you don't care which column your value came from, the method you found should do just fine.
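For the record, an UNPIVOT in SQL Server looks something like this (table and column names made up):

```sql
-- Rotates the columns Col1..Col3 into rows; ColName records which
-- column each value came from, which the "found method" typically loses.
SELECT Id, ColName, ColValue
FROM SourceTable
UNPIVOT (
    ColValue FOR ColName IN (Col1, Col2, Col3)
) AS u;
```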
Jörgen Andersson 15-Sep-20 9:00am View
Can you update the query with table aliases for all fields so we know which field belongs to which table?

To me the most likely culprits are the non-sargable REPLACE functions inside the conditions.
Also, your LIKE predicates have neither '_' nor '%' in them, so they work like an equality check.
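A sketch of both points, with made-up column and parameter names:

```sql
-- Non-sargable: wrapping the column in a function defeats any index
-- WHERE REPLACE(t.Code, '-', '') = @code

-- Sargable alternative: leave the column bare and adjust the parameter side
-- (or persist a cleaned copy in an indexed computed column)
-- WHERE t.Code = @codeWithDashes

-- A LIKE without % or _ behaves exactly like equality:
-- WHERE t.Name LIKE 'Smith'    is the same as    t.Name = 'Smith'
```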
Jörgen Andersson 2-Sep-20 2:16am View
Yes, you can achieve it using a combo of "Islands and Gaps" and Pivot.
There you have your search terms. I have other more urgent stuff to take care of.
Jörgen Andersson 31-Aug-20 4:23am View
Well you won't get an error, but you will get a row for each value of NPACode, not just for NPACode > 0
Jörgen Andersson 31-Aug-20 3:19am View
Well, that probably won't work.
You can't use NPA in the Group by if it's created in the Select.
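What that means in practice, with a made-up derivation of NPA: the alias is not visible to GROUP BY, so you either repeat the expression or derive it first in a subquery/CTE.

```sql
-- Fails: the alias NPA does not exist yet when GROUP BY is evaluated
-- SELECT LEFT(NPACode, 3) AS NPA, COUNT(*) AS Cnt
-- FROM SomeTable
-- GROUP BY NPA;

-- Works: repeat the expression in the GROUP BY
SELECT LEFT(NPACode, 3) AS NPA, COUNT(*) AS Cnt
FROM SomeTable
GROUP BY LEFT(NPACode, 3);
```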
Jörgen Andersson 12-Aug-20 5:18am View
You cannot, because it's a table variable.
It refers to a table, not a value, or even a row with values.
Now I suggest you read the article you were linked. It's good
Jörgen Andersson 23-Jun-20 15:46pm View
Jörgen Andersson 10-Jun-20 4:41am View
No, the value of GetDate() cannot change during the query. It's set as a constant at the start of the query.
This can easily be tested using: SELECT somecolumn, GetDate() AS MyDateTime INTO MyTestTable FROM MyVeryLargeTable; the value of MyDateTime will be constant even if the query runs for minutes.

A transaction with several statements is a completely different question though. Each statement will have its own value of GetDate(), taken at the start of that statement.
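Both behaviors in one sketch (table names taken from the comment above, otherwise hypothetical):

```sql
-- GETDATE() is evaluated once per statement, not once per row:
SELECT somecolumn,
       GETDATE() AS MyDateTime      -- same value on every row
INTO MyTestTable
FROM MyVeryLargeTable;

-- In a multi-statement transaction each statement gets its own value:
BEGIN TRANSACTION;
SELECT GETDATE();                   -- value at the start of this statement
WAITFOR DELAY '00:00:05';
SELECT GETDATE();                   -- five seconds later, a different value
COMMIT;
```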
Jörgen Andersson 10-Jun-20 3:57am View
GetDate() is not a volatile function, it is a non-deterministic runtime constant.
Jörgen Andersson 7-May-20 13:30pm View
Jörgen Andersson 7-May-20 12:49pm View
Then there's not much more to do. You will probably have a merge join taking half the time and two index scans (preferably clustered) taking the other half.
Jörgen Andersson 7-May-20 12:42pm View
I suspect that's as good as it gets when you have a self join on a table with 5M rows.
I guess you have an index on ID,ZIP,START,END
Jörgen Andersson 7-May-20 12:00pm View
Badly I suspect.

I also haven't tested it for overlapping ranges. And when talking about those I see a possible bug. Change the ORDER BY of the second ROW_NUMBER to EndDate.
Jörgen Andersson 7-May-20 8:19am View
We still need to know what your source table looks like.
Jörgen Andersson 27-Apr-20 4:46am View
My experiences:
When using XSD.exe it's important to check whether you're using the 32- or 64-bit version: the 32-bit version crashed after five seconds, while the 64-bit version actually ran for almost four hours before running out of resources. My conclusion is that if the file fits in memory, the 64-bit version of XSD.exe would have done the job.

Instead I ended up using the XmlReader finding all element names using this code that I found on the net:
Dictionary<string, List<int>> nodeTable = new Dictionary<string, List<int>>();
using (XmlReader reader = XmlReader.Create(documentPath))
{
    while (reader.Read())
    {
        if (reader.NodeType == XmlNodeType.Element)
        {
            // First time we see this element name: record its depth
            if (!nodeTable.ContainsKey(reader.LocalName))
                nodeTable.Add(reader.LocalName, new List<int>(new int[] { reader.Depth }));
            // Seen before, but at a new depth: record the new depth too
            else if (!nodeTable[reader.LocalName].Contains(reader.Depth))
                nodeTable[reader.LocalName].Add(reader.Depth);
        }
    }
}
Console.WriteLine("The node table has {0} items.", nodeTable.Count);
foreach (KeyValuePair<string, List<int>> kv in nodeTable)
{
    Console.Write("{0} [{1}]: ", kv.Key, kv.Value.Count);
    for (int i = 0; i < kv.Value.Count; i++)
    {
        Console.Write("{0}", kv.Value[i]);
        if (i < kv.Value.Count - 1)
            Console.Write(", ");
    }
    Console.WriteLine();
}

And then I searched the document for all elements, cutting and pasting all items into a new document containing all existing elements. And then I ran Xsd.exe on that file.
Jörgen Andersson 20-Apr-20 14:52pm View
The problem is just that I didn't create the XML-file myself.
My first goal is to find all the classes and fields.
If I can use a tool I'll save a lot of time.
Jörgen Andersson 20-Apr-20 14:33pm View
There is definitely a lot of "redundant" data in it; every record seems to be between 10 kB and 20 kB approximately.
The problem is just that while some parts are always there, others come and go. Or lots of nullable fields, if you want to think of it that way.
Jörgen Andersson 20-Apr-20 13:44pm View
Loading the file into a dataset also runs out of memory.
Jörgen Andersson 20-Apr-20 13:43pm View
Yeah, but the problem is that I don't have the dataset, I have an XML file 80 GB in size, so I can't even create the dataset without running out of memory.
At least not without a serious upgrade in RAM. :)
Jörgen Andersson 20-Apr-20 13:22pm View
I'm looking at that at the moment, but it's not an out of the box solution. It seems to need quite some work.
Jörgen Andersson 20-Apr-20 13:05pm View
Getting an OutOfMemoryException.
It seems like it's trying to add the whole file to memory.
Jörgen Andersson 20-Apr-20 12:54pm View
That's the backup plan
Jörgen Andersson 20-Apr-20 12:53pm View
i'll try it
Jörgen Andersson 11-Apr-20 8:28am View
Link to execution plan doesn't work, it points back to this question.
Jörgen Andersson 9-Apr-20 4:19am View
What error do you get when you DON'T insert the textboxes outside the loop?
Jörgen Andersson 9-Apr-20 4:06am View
No worries.

I don't have a problem understanding why people want to use a simpler to use method, what I don't understand is why MS doesn't fix it. (Or actually remove it)
Jörgen Andersson 8-Apr-20 15:18pm View
No, AddWithValue is NOT the recommended approach, more info on why here:
tl;dr: AddWithValue might assume the wrong type if it makes the wrong guess from the value, which may or may not cause an error.
Jörgen Andersson 25-Mar-20 3:42am View
I'm sorry my solution didn't help you. I will not help you again.
Jörgen Andersson 25-Mar-20 2:48am View
Is this always going to be the same two hours per day? Or is it two hours from the first occurrence?
Jörgen Andersson 24-Mar-20 14:42pm View
For the record, I do not consider myself at the level of Richard. :)
Jörgen Andersson 18-Mar-20 11:36am View
Normally you limit the number of rows by using OFFSET FETCH NEXT.

If you have an older version of Oracle you need to use ROWNUM.

What version is your Oracle?
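Both variants, sketched with made-up table and column names (the OFFSET/FETCH row-limiting clause needs Oracle 12c or later):

```sql
-- Oracle 12c and later:
SELECT *
FROM Employees
ORDER BY HireDate
OFFSET 10 ROWS FETCH NEXT 10 ROWS ONLY;

-- Older versions: filter on ROWNUM over an ordered subquery
SELECT *
FROM (SELECT e.*, ROWNUM AS rn
      FROM (SELECT * FROM Employees ORDER BY HireDate) e
      WHERE ROWNUM <= 20)
WHERE rn > 10;
```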
Jörgen Andersson 6-Mar-20 3:22am View
What database is it?
Jörgen Andersson 2-Mar-20 10:30am View
Just remembered, there is one more thing you can try:

But use it with caution, if you set it on the wrong table you can easily run out of memory.
Jörgen Andersson 2-Mar-20 9:29am View
Sounds like you have caught the 'EAV disease' also known as 'One True Lookup syndrome'. We've all been there. Don't.

Or at least read this before going any further:
Jörgen Andersson 2-Mar-20 6:26am View
No, I'm suggesting that there's not much you can do to force a rowlock in this case. Sql-Server is going to refuse because it tries to keep a balance between the cost of locking vs the cost of concurrency.
When you only have one level in the index a page/table lock is much more efficient, but to explain why I would need to explain how an index works.
Luckily someone has already done that better than I would:
Jörgen Andersson 28-Feb-20 3:00am View
No, it's quite possible to have more than one manager. For example one department manager at the same time as you have one or more project managers.
To handle it properly you obviously need a different db-structure though.
Jörgen Andersson 28-Feb-20 1:29am View
Can you show us the query?
Jörgen Andersson 24-Feb-20 14:52pm View
I made my response an answer so I could add some code
Jörgen Andersson 24-Feb-20 7:40am View
Also, stop using AddWithValue!
Jörgen Andersson 24-Feb-20 7:39am View
Also, stop using AddWithValue!
Jörgen Andersson 21-Feb-20 5:21am View
I suspect the difference depends on whether the table fits in one page or not.
If the table fits in one page it will not make a big enough difference what type of lock is taken.
The server always writes a whole page to disc no matter how many rows you write inside the page. This is because of how hard drives work.
When you lock one row, you don't lock it on the hard drive, you lock it in the buffer cache. The changes made still need to be written to the disc.
And when a page is written to the disc there will be a page lock taken on the buffer to prevent you from changing it while writing.
Jörgen Andersson 14-Feb-20 3:04am View
The rowlock hint is just exactly that, a hint. The optimizer may choose to ignore it.

The reason for that is that it probably won't use the index anyway when the table is so small that it's only one level deep, so there is basically no performance gain in using an index.

That would be especially true if your index isn't covering all the selected columns.
Then it would need to first seek the index and then make a lookup into the table. Which would be twice the amount of operations and a lot slower.

Test it again prefixing the query with SET SHOWPLAN_TEXT ON; and post the resulting plan here.
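Something like this (the query itself is a made-up stand-in for yours):

```sql
SET SHOWPLAN_TEXT ON;
GO
-- The statement is compiled but not executed; the estimated plan is returned
SELECT Id, Name
FROM YourTable WITH (ROWLOCK)
WHERE Id = 42;
GO
SET SHOWPLAN_TEXT OFF;
GO
```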
Jörgen Andersson 13-Feb-20 7:58am View
They are Varchars, so they don't actually take more space than needed.
That said, one should never define a column larger than what is necessary, as that is part of the sanity check of the data.
Jörgen Andersson 27-Jan-20 8:38am View
Oh bugger. It turned up in active questions.
Probably spam again.
We need a filter for that.
Jörgen Andersson 23-Jan-20 9:20am View
I wouldn't use FORMAT as it changes the type to a string.
Better use Round((3990-460)/CAST(95 as Decimal),2,1) or CAST((3990-460)/CAST(95 as Decimal) AS Decimal(7,2))
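Side by side, using the numbers from the comment (the third ROUND argument, 1, means truncate instead of round):

```sql
-- FORMAT returns nvarchar, so the result is no longer a number:
SELECT FORMAT((3990 - 460) / CAST(95 AS DECIMAL), 'N2');           -- a string

-- These keep a numeric type:
SELECT ROUND((3990 - 460) / CAST(95 AS DECIMAL), 2, 1);            -- truncates: 37.15
SELECT CAST((3990 - 460) / CAST(95 AS DECIMAL) AS DECIMAL(7, 2));  -- rounds: 37.16
```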
Jörgen Andersson 17-Jan-20 6:23am View
This is not a query, this is a nested and looped procedure. Yes, it looks very slow.
You're selecting your data from a lot of temp tables, but what does the table the data comes from look like? What does the data look like?
What do you want to achieve?
Jörgen Andersson 10-Dec-19 9:40am View
Well, you need to tell us what the failure is.
Jörgen Andersson 10-Dec-19 7:46am View
what error do you get when it fails?
Jörgen Andersson 24-Nov-19 14:08pm View
This is old, but the principles are still the same:
Jörgen Andersson 2-Oct-19 5:54am View
I bet you're having duplicate rows with PriceLastDate = MAX(PriceLastDate)
Jörgen Andersson 13-Sep-19 4:05am View
None, but it's considered to be screaming, and it makes it harder to read.
Jörgen Andersson 7-May-19 5:30am View
There is nothing identifying what should become Col1 and Col2.
And how would you handle the case where there are three rows for an ID?
Jörgen Andersson 21-Feb-19 11:05am View
Need coffee
Jörgen Andersson 21-Feb-19 7:55am View
There are four ways to return data in your link
Jörgen Andersson 15-Feb-19 1:58am View
Quite right you are.
Jörgen Andersson 10-Feb-19 13:27pm View
Hold on to that thought, EAV has affected the sanity of many developers. More on that here:

That said, it's probably the best way to go still, and as you already have realized you need (at least) two tables, one for the actual data and one for the attributes.
Next step is to keep it all as simple as possible. Whenever management comes to you wanting the statistics on one object combined with a second, depending on the values of a third while keeping the options open for a fourth still nonexistent object, it's time to look for a new job. :-)

Note that using xml or json in a document database is just another method of doing the same thing. but slightly less rigid and slightly more prone to break.

Oh, and don't do option 2, ever!
Jörgen Andersson 10-Dec-18 15:13pm View
Use a pivot
Jörgen Andersson 3-Dec-18 6:58am View
Tell microsoft :-)

They all have their uses. At my previous job we ended up using both enumerated paths and adjacency lists, for performance reasons.
A pretty good overview here:
Jörgen Andersson 3-Dec-18 6:14am View
It's an enumerated path. Which is one of the three major ways to store hierarchical data. Same as MS is using in the hierarchyid
Jörgen Andersson 3-Dec-18 6:09am View
It's an enumerated path, as opposed to an adjacency list or nested Set
Jörgen Andersson 12-Oct-18 4:56am View

And also, what database?
Jörgen Andersson 8-Sep-18 15:39pm View
"Googled a lot not found any solution"
I wonder what you searched for then?
Try to google: "access update sql query from another table" and you will get several results that give you the info you need.
For example:
Both on the first page.
Jörgen Andersson 3-Sep-18 2:14am View
Too little information to say, but it could be someone experimenting with a columnstore index.
Jörgen Andersson 24-Aug-18 1:27am View
Updated solution.
But you still need to add logic for knowing when to replace four or five characters
Jörgen Andersson 26-Jun-18 2:44am View
The error message is quite clear, the input to the bind variable is too long.
But to be able to help you we would need to see the relevant code.
Jörgen Andersson 8-Jun-18 7:19am View
Dammit, now I'll have to do it. :-)
Jörgen Andersson 7-Jun-18 9:40am View
It's not complicated, but it's still over five hundred edges that need to be entered manually.
Jörgen Andersson 7-Jun-18 1:57am View
The challenge is to encode all those ">".
I think I'll pass.
Jörgen Andersson 1-Jun-18 15:11pm View
But you're trying to store Chars in those DATE columns, that won't work.
If you want a part of a date field you should use EXTRACT.
But that should be done when you need the part, it should never be stored like that, that's bad practice.
Jörgen Andersson 14-May-18 6:32am View
Oh, I knew it was wrong alright, I just couldn't wrap my head around to get it right.
I have meanwhile solved it though, using a nested Groupby. But your solution is better and as simple as I thought it should be.
Jörgen Andersson 9-May-18 4:01am View
Use "Improve question" instead of posting a new solution
Jörgen Andersson 7-May-18 2:05am View
Use a case statement.
Jörgen Andersson 3-May-18 6:57am View
And what's wrong?
Jörgen Andersson 14-Mar-18 4:56am View
Your procedure is also returning the number of rows inserted.
After the very first BEGIN, add "SET NOCOUNT ON;" and it will only return the result of the "SELECT @Status"
Jörgen Andersson 9-Mar-18 9:10am View
Clarified it, there are people for whom coffee isn't enough. :)
Jörgen Andersson 9-Mar-18 8:57am View
I thought I did: "But the performance is mostly depending on the conditions (and indexes) you use to determine which rows that already has been copied between the tables."
Jörgen Andersson 8-Mar-18 14:28pm View
You're welcome.
Jörgen Andersson 8-Mar-18 14:28pm View
I'm curious as to whether the reason that the key doesn't work with the Stored procedure is because you're returning a cursor.
The cursor doesn't have a key. I would try to return a different kind of resultset instead, possibly an associative array (dictionary in DotNet).
Just speculating here.
Jörgen Andersson 7-Mar-18 15:56pm View
Yes, I can see what you're doing and why it doesn't work, Peter is absolutely right.

Explain in words what you want to achieve and we might be able to help you.
Jörgen Andersson 7-Mar-18 14:23pm View
I've been digging a bit deeper into what's happening here.
It seems that using a stored procedure and a cursor prohibits the OracleDatareader from retrieving the key information. And in this case the whole document needs to be retrieved directly. (InitialLONGFetchSize = -1)

To find out if this is the case you can open the schematable and check the field info, DataTable SchemaTable = reader.GetSchemaTable();

Another test you can do is to retrieve the information directly by removing the output parameter, setting the CommandType = System.Data.CommandType.Text and the CommandText = "SELECT dj.IDT, dj.doc
WHERE dj.id_mvt = m.id_mvt
AND m.id_unite_m = :pin_id_unite_m;

Jörgen Andersson 28-Feb-18 6:22am View
One last question, do you start with an empty target table or not?
Jörgen Andersson 28-Feb-18 5:55am View
How about the series (a,1), (b,2), (b,1) is that also impossible?
Jörgen Andersson 28-Feb-18 4:50am View
So, how would you handle this case:
First you add (a,1) it gets UniqueID 1
Then add (b,2), this will get a new UniqueID 2
How do you handle the next row containing (a,2)?

Or is all data supposed to be added as a set in one go?
Jörgen Andersson 28-Feb-18 3:23am View
How would you handle (a,1),(b,1),(b,2),(c,2),(c,3) ?
Shall they all get the same unique ID?
Jörgen Andersson 12-Feb-18 15:26pm View
You should probably exchange WHEN ISNULL(x.Number,'') = '' THEN 0 for WHEN x.Number IS NULL THEN 0
It saves one operation and a cast, and is probably more correct in the edge case where both tables actually contain ''.
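The two variants side by side (context from the comment above, otherwise hypothetical):

```sql
-- Extra work: evaluates ISNULL and compares against ''
-- (and quietly treats a real '' the same as NULL)
CASE WHEN ISNULL(x.Number, '') = '' THEN 0 ELSE 1 END

-- Cheaper and more precise: only the NULL case maps to 0
CASE WHEN x.Number IS NULL THEN 0 ELSE 1 END
```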
Jörgen Andersson 1-Feb-18 4:51am View
Solution 1 is answering your actual question.
But the question we need to ask is: Why do you want to set the ID in the identity column manually?
Jörgen Andersson 31-Jan-18 1:55am View
I suspect it is 'Something went wrong'
Jörgen Andersson 30-Jan-18 7:46am View
You could probably exchange 2018 for YEAR(SYSDATETIME())
Jörgen Andersson 24-Dec-17 7:14am View
You need to output a collection type, an array or a cursor.
Jörgen Andersson 24-Dec-17 7:12am View
He's using Oracle, not Sql Server
Jörgen Andersson 18-Dec-17 15:18pm View
That's an answer, not a comment.
But add the fact that the default value should be specified in the stored procedure
Jörgen Andersson 28-Nov-17 6:58am View
I'm stumped!
It's one of these things I've "known" since forever. I'll have to check if I've always been wrong or if they have changed it.
Jörgen Andersson 28-Nov-17 4:53am View
You should seriously rethink your solution, it's not normalized and this is just the first problem you'll encounter.
Do follow F-ES Sitecore's suggestion.
Jörgen Andersson 28-Nov-17 4:47am View
I thought you didn't touch VB.Net. :-)
It was almost flawless, but alas, this construct won't work: Dim b As Boolean = i = 0
I'd recommend: Dim b As Boolean = IF(i = 0,True,False)
Jörgen Andersson 13-Nov-17 4:42am View
Could you update the question with the table definitions?
Jörgen Andersson 28-Sep-17 13:54pm View
Oh yes indeed.
Jörgen Andersson 28-Sep-17 5:36am View
Um, don't teach people to use AddWithValue, it can lead to plan cache pollution and type inference errors.
More on that here:
And here:
Jörgen Andersson 28-Sep-17 4:36am View
Glad to help.
Jörgen Andersson 27-Sep-17 4:57am View
Just curious.
You accepted my solution so it must have worked.
How much faster did it get?
Jörgen Andersson 19-Jun-17 4:07am View
You only set one target column in the Update Clause. You probably need something like this:
MERGE od_test2 T --target
USING OD_TEST S --source
ON S.ref = T.REF
WHEN MATCHED AND S.stepname IN ('l2','l3')
THEN UPDATE SET T.Target_Column_Name = CONVERT(VARCHAR, S.[ENDDATE])
WHEN NOT MATCHED
THEN INSERT (Target_Column_Names)
VALUES (Source_Column_Names);

Jörgen Andersson 5-Jun-17 3:58am View
What version of SQL Server are you using?
Jörgen Andersson 3-Jun-17 4:45am View
You're quite right, this was a quick writeup to explain the principle.
Good feedback on the comparison, I've never experienced the turkish ı problem.
The question is, what's the best way handle that in SQL? Upper isn't much better since there is a dotted upper case İ as well.
<edit>scrap that, SQL server isn't case sensitive using the default collation.</edit>
Jörgen Andersson 22-Apr-17 14:52pm View
Stern-Brocot and Farey are obviously related, now that I know about Farey.
I would have linked to Wikipedia if I had been close to a computer; I am, alas, too lazy to do that on the phone.
Jörgen Andersson 22-Apr-17 13:15pm View
I knew that as a Stern-Brocot binary search tree.
Jörgen Andersson 5-Apr-17 5:28am View
Pivot should work, show us your code.
Jörgen Andersson 12-Mar-17 16:58pm View
Methinks you're complaining too much.
Your code certainly wasn't faster, unless the competition was about shaving away as much code as possible from being measured.

It's easy enough to show whose code is the fastest if you loop the code.
You can do that by reading words from a file, create a key, and add the key and word to a dictionary and measure the complete time needed for that.
Oh, we already did that, didn't we. :laugh:
Jörgen Andersson 7-Mar-17 1:39am View
Of course there would be a difference if you copy over the elements from an IList to a new List. The fact that you add time if you do unnecessary work doesn't mean anything.
So, explain properly how the code is measuring any more wrong than yours.
We're both passing a reference, from a Dictionary!
Jörgen Andersson 6-Mar-17 14:38pm View
It is faster on a newer PC; your solution and mine have almost the same speed on my computer.
But they should, since the important parts are almost the same code as well.
Not just the lookup against a dictionary; compare your GetKey with the first version of my solution. It only differs by the GetHashCode() part.

But I'm seeing a problem with this part of your new code.
When you create your key you're taking the hash value from the sorted string. That's unnecessary; it's done internally in the dictionary anyway.
But the dictionary also has built-in handling for hash collisions. I cannot see any handling of hash collisions in your code at all.

GetHashCode() returns an Int32 BTW, so if you pass a long to the dictionary it will calculate the hash code like (unchecked((int)((long)m_value)) ^ (int)(m_value >> 32)) instead of just passing it through, which would be the case with an Int32.
Jörgen Andersson 6-Mar-17 13:58pm View
Just because something implements IEnumerable doesn't mean it's a lazy-loading iterator method.
ILookup is implemented by the Lookup class.
And when you do a lookup against the Lookup :-), it returns an IGrouping. Which is implemented by a Grouping.
Which implements, wait for it, an IList. Internally it's implemented using an array, but that's the case in a List as well.
Anyway, the code is actually just passing a reference after a lookup against a dictionary, just like you do in your code.

Which brings us to the other one which is a plain oops!
I was experimenting with some stuff and forgot to change it back to the previous state. (Which you can find if you compare the versions of the solution)
Uploading fix and new timings in a couple of minutes.

Jörgen Andersson 5-Mar-17 7:35am View
You can probably speed up the solution quite a bit by assuming 32 letters in the alphabet and use a bitshift.
Jörgen Andersson 3-Mar-17 18:06pm View
Is arrest really an anagram of teaser?
Arrest has two Rs, while teaser has two Es.
Jörgen Andersson 3-Mar-17 14:12pm View
If I'm allowed to use a database with a function based index it could definitely be done in fractions of a second
Jörgen Andersson 24-Jan-17 6:30am View
What values does Table _B have for [assessmentsummaryref] in your example?
Jörgen Andersson 25-Nov-16 14:33pm View
What about handling perfectly legal words such as Nincompoop?
Jörgen Andersson 25-Oct-16 9:06am View
What version is your database?
Jörgen Andersson 25-Oct-16 4:46am View
Sounds like a conversion problem to me.
What datatype is the date column, is it Date?
Is there any business layer between CR and DB or is CR accessing the DB directly?
Jörgen Andersson 5-Oct-16 3:16am View
If you know what you're doing there is no difference between stored procedures and using plain SQL. And there shouldn't be.
The most common reason for an SP to be faster is that the plan is cached, but if you use parameters WITH DEFINED SIZES, your plain SQL will also use the cache.
The most common reason for an SP to be slower than plain SQL is when you are using dynamic queries: the compiled plan is then good only for a certain set of parameters. One solution is to use the WITH RECOMPILE option when calling the SP.
Jörgen Andersson 25-Sep-16 15:23pm View
Why do you drop and create instead of simply truncating the table?
Jörgen Andersson 9-Jun-16 4:31am View
He doesn't get an error, he's using "On Error Resume Next", ;-)
Jörgen Andersson 24-May-16 15:57pm View
Get a book. I'd recommend this:
Jörgen Andersson 12-May-16 9:45am View
This question had an accepted answer 2½ years ago.
Jörgen Andersson 11-Apr-16 4:51am View
You're mixing implicit joins with ANSI joins. While technically possible (unless it's MySQL IIRC), you're ending up in a syntactical quagmire.
Decide on one join syntax. I'd recommend ANSI for clarity.
Jörgen Andersson 26-Mar-16 4:14am View
Updated solution
Jörgen Andersson 10-Mar-16 7:05am View
And your problem is?
Jörgen Andersson 3-Mar-16 4:44am View
Pass a collection
Jörgen Andersson 2-Mar-16 5:43am View
How many connections are you using in the TransactionScope?
Jörgen Andersson 29-Feb-16 7:15am View
Why not?
Jörgen Andersson 29-Feb-16 7:03am View
Go to and look for "Attach a database file, located in the data directory, on connect to a local SQL Server Express instance"
Jörgen Andersson 29-Feb-16 5:12am View
Have a look at the connection string, I guess it's pointing to your pc and not a local file.
Jörgen Andersson 24-Feb-16 15:11pm View
Glad it worked, I'm just a bit curious to know which other changes were necessary?
Jörgen Andersson 24-Feb-16 5:33am View
Can you create an example fiddle with some of your data here:
and post the link to the fiddle?
Jörgen Andersson 24-Feb-16 4:33am View
Remove the group by
Jörgen Andersson 24-Feb-16 2:47am View
Just look at the first row of your query, you're using the same field for both the Driver and Commuter.
Jörgen Andersson 24-Feb-16 2:35am View
Is the error message by any chance ORA-22992?
Then take a look at this link:
Jörgen Andersson 24-Feb-16 2:32am View
And what's the error message?
Jörgen Andersson 17-Feb-16 15:13pm View
In which way does it not work when user selects OR for the where clause?
Jörgen Andersson 12-Feb-16 5:10am View
What's the format of the dates in the "starttime" column?
Jörgen Andersson 10-Feb-16 15:55pm View
Agree with Sascha
Jörgen Andersson 8-Feb-16 3:13am View
You're breaking the first Normal Form right there.
While Asifs solution fixes your immediate problem it's not a long term solution. Read up on normalization
Jörgen Andersson 5-Feb-16 4:26am View
I'm willing to bet that the optimizer would realize that a join is the same thing in this case and create such a plan anyway. I'd keep the readability.
Jörgen Andersson 5-Feb-16 3:53am View
The only optimization I can think of is to put both CTEs into one, like this:
with cte as (
    SELECT Min(TargetDate) AS MinTargetDate, Max(TargetDate) AS MaxTargetDate
    FROM yourtable
    HAVING Min(TargetDate) >= @FromTargetDate
    AND Max(TargetDate) <= @ToTargetDate
)
But it won't make any difference for performance.
Jörgen Andersson 29-Jan-16 3:39am View
Never concatenate strings into a query, always use parameters.
Jörgen Andersson 24-Jan-16 18:32pm View
BTW, why all those TEMP Tables?
Is the server so old it doesn't support CTEs?
Jörgen Andersson 24-Jan-16 15:52pm View
To make sure the field sizes of #TEMP1 and #TEMP2 are identical?
Or maybe to get it logged?
Jörgen Andersson 24-Jan-16 15:38pm View
Can a MemberItemCode be used by more than one MemberNo?
Jörgen Andersson 21-Jan-16 2:26am View
And what's your question?
Jörgen Andersson 16-Jan-16 14:40pm View
More information please, when does it happen? What's the circumstances?
Jörgen Andersson 14-Jan-16 15:58pm View
One reason I prefer method syntax to query syntax is because it doesn't look like SQL.
As a beginner you might get confused into expecting it to work like SQL, which it certainly does not. (Well, mostly)
Jörgen Andersson 14-Jan-16 15:44pm View
Jörgen Andersson 13-Jan-16 17:23pm View
If you want examples you should take a look here:[^]
Jörgen Andersson 7-Jan-16 7:43am View
Can we get the full exception?
Jörgen Andersson 8-Dec-15 1:56am View
And your error is?
Jörgen Andersson 4-Nov-15 5:49am View
Is it the first and last date per dsg_code you need?
Jörgen Andersson 3-Nov-15 15:46pm View
That's simply wrong.
Count(*) is specifically defined in SQL92 to return the cardinality of the query.
And counting the rows where 1 is not null is less efficient than just counting the rows.
That said, the optimizer would realize that it's the same thing and give you the same plan.
Jörgen Andersson 29-Oct-15 4:22am View
It's odd that you basically can't find any info on that online.
When done correctly there is nothing that is as fast as a database, they are highly optimized for filtering and aggregation.

My rule of thumb is to keep the size of the datasets to a minimum in every step, so if you can use CTEs to filter and aggregate before joining you can gain several magnitudes in performance.
Jörgen Andersson 29-Oct-15 3:59am View
Jörgen Andersson 29-Oct-15 3:58am View
Jörgen Andersson 29-Oct-15 3:58am View
It's just about keeping the order of execution in mind.

First the database does all the Joining.
Then all the filtering (Where Clause)
Then Aggregation (Group By)
Filtering on Aggregates (Having Clause)
Analytic Functions
And finally Order by.

This is one of the most powerful aspects of CTEs, they allow you to change the order of execution.

Also note that there's no filtering on Analytic functions, so a CTE is necessary here.
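A sketch of that last point, with made-up table and column names: the analytic result can only be filtered one step "later", via a CTE (or subquery).

```sql
-- ROW_NUMBER() cannot appear in a WHERE clause, so compute it in a CTE first
WITH ranked AS (
    SELECT OrderId,
           ROW_NUMBER() OVER (PARTITION BY CustomerId
                              ORDER BY OrderDate DESC) AS rn
    FROM Orders
)
SELECT OrderId
FROM ranked
WHERE rn = 1;   -- latest order per customer
```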
Jörgen Andersson 28-Oct-15 14:30pm View
Sorry 'bout that. I realized he/she is Swedish and my brain swapped languages.
Jörgen Andersson 28-Oct-15 11:30am View
Don't use a comma separated string, pass your codes as an array.
Jörgen Andersson 18-Oct-15 7:15am View
What database and version thereof are you using?
Jörgen Andersson 1-Oct-15 8:27am View
You can't refer to the result from one column in another column. You need to use a CTE or a subquery.
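A minimal illustration (hypothetical table and columns):

```sql
-- Fails: the alias Total is not visible to the expression on the next line
-- SELECT Price * Qty AS Total,
--        Total * 0.25 AS Vat      -- error: unknown column
-- FROM Orders;

-- Works: materialize the alias first in a CTE (a subquery works the same way)
WITH t AS (
    SELECT Price * Qty AS Total FROM Orders
)
SELECT Total, Total * 0.25 AS Vat FROM t;
```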
Jörgen Andersson 14-Sep-15 4:13am View
The payload just arrived in the spam queue
Jörgen Andersson 9-Sep-15 5:01am View
Check the datatypes, if it should be NVarchar instead of Varchar
Jörgen Andersson 9-Sep-15 4:55am View
Jörgen Andersson 8-Sep-15 1:53am View
I'm wondering about the connection between district and village. They are both usually geographical entities and should normally have a direct relation, not one via a user.
Jörgen Andersson 7-Sep-15 10:16am View
A day when you learn something new is a good day.
I never knew about partitioned joins. Simplifies stuff immensely
Jörgen Andersson 4-Sep-15 7:09am View
regionid NUMBER NOT NULL,
productid NUMBER NOT NULL,
saledate DATE NOT NULL,

This of course assumes that region and product are stored in other tables and the IDs are foreign keys
Jörgen Andersson 4-Sep-15 6:55am View
The datesequence now creates all three months of the first and last quarters.
Jörgen Andersson 4-Sep-15 5:53am View
I have changed the query to reflect the new data.
The datesequence now covers whole quarters.
I also added a GROUP BY since your new data has duplicated rows; if that is a mistake, simply omit the GROUP BY and SUM()
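I can't show the actual query here, but the general technique of generating a date sequence with a recursive CTE looks like this as a sketch (SQLite syntax from Python, made-up range covering the first two quarters of 2015):

```python
import sqlite3

con = sqlite3.connect(":memory:")

# First day of every month from January through June 2015.
months = [row[0] for row in con.execute("""
WITH RECURSIVE datesequence(d) AS (
    SELECT '2015-01-01'
    UNION ALL
    SELECT date(d, '+1 month') FROM datesequence WHERE d < '2015-06-01'
)
SELECT d FROM datesequence
""")]

print(len(months), months[0], months[-1])  # 6 2015-01-01 2015-06-01
```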
Jörgen Andersson 4-Sep-15 4:33am View
I don't understand; can you please elaborate?
Jörgen Andersson 4-Sep-15 4:00am View
Same principle, just add region wherever you have product in the query.
Jörgen Andersson 1-Sep-15 2:39am View
Oh, by the way. Using all capitals is considered to be shouting and is bad form.
Jörgen Andersson 31-Aug-15 8:06am View
I have a feeling you're trying to solve the wrong problem.
Jörgen Andersson 29-Aug-15 16:45pm View
That it does a lot for performance is of course only valid if you're reusing the Command with the same CommandText and the same Parameters (but with different values)

I believe you might be mixing up "Parsing the Command" with Reusing the plan.

When you run Command.Prepare() the Command is sent to the server for parsing and creation of a plan. A Handle for the plan ID is returned to the Command.
So when you execute a prepared command the Plan ID is sent together with the Parameter values and the parsing as well as the plan creation is bypassed.

If you don't prepare a Command before execution it will be parsed and a plan will be created on the first execution.
On the subsequent executions of the Command it will still get parsed on every execution but the plan will get reused from the plan cache if suitable.
So the parsing of the Command will happen on every execution if the Command is not prepared.

Take a look at the code at,6d1f148f36c73572
EXECTYPE is the Enum for whether the command is prepared or not.
_prepareHandle is the Plan ID
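ADO.NET specifics aside, the reuse pattern itself is the same in most drivers: keep the SQL text identical and vary only the parameter values. As a sketch in Python, whose sqlite3 module keeps recently executed statements in an internal cache so the text is compiled once (table and data made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE log (id INTEGER, msg TEXT)")

sql = "INSERT INTO log VALUES (?, ?)"  # same SQL text on every execution

# Identical SQL strings let the driver reuse the compiled statement
# from its cache; only the bound parameter values change per call.
for i in range(5):
    con.execute(sql, (i, f"event {i}"))

count = con.execute("SELECT COUNT(*) FROM log").fetchone()[0]
print(count)  # 5
```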
Jörgen Andersson 28-Aug-15 18:46pm View
It's doing quite a lot for performance since the database doesn't need to parse the query more than once.
But it's not useful for stored procedures.
Jörgen Andersson 28-Aug-15 18:13pm View
Am I allowed to plug my own high performance solution here?
Jörgen Andersson 28-Aug-15 18:11pm View
You forgot the Command.Prepare()
Jörgen Andersson 28-Aug-15 7:16am View
Jörgen Andersson 25-Aug-15 3:31am View
Are you using System.Data.OracleClient or Oracle.DataAccess.Client?
Jörgen Andersson 21-Aug-15 7:56am View
Why do you want to recalculate the IDs?
A surrogate key does not need to be recalculated and shouldn't be recalculated.
Gaps in a sequence do not affect the database in the slightest.
The only reason I can see is if you want to feed your OCD.
Jörgen Andersson 21-Aug-15 3:01am View
What data types are those fields in the Excel sheet?
Jörgen Andersson 21-Aug-15 2:53am View
Not necessary, conversion is implicit.
BIGINT is, by the way, not a datatype in Oracle; you're mixing it up with SQL Server.
Jörgen Andersson 21-Aug-15 2:46am View
To properly normalize, the treat_info table should be connected to the other table via their IDs.
Jörgen Andersson 19-Aug-15 9:48am View
SUBSTR() is correct, I don't see why you got downvoted. Have a compensation.
Jörgen Andersson 19-Aug-15 9:47am View
Nope, that doesn't work; your link is for MySQL, which is nowadays owned by Oracle.
For Oracle's own implementation you need to use SUBSTR().
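For reference, SUBSTR takes a 1-based start position and a length; SQLite's SUBSTR behaves the same way as Oracle's here, so a quick check from Python:

```python
import sqlite3

con = sqlite3.connect(":memory:")

# SUBSTR(string, start, length) with 1-based positions, as in Oracle.
piece = con.execute("SELECT SUBSTR('hello world', 7, 5)").fetchone()[0]

print(piece)  # world
```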
Jörgen Andersson 12-Jul-15 18:52pm View
I can't immediately see what's wrong, but I have a few notes.
You should probably change
TO_CHAR(chgroup.created,'YYYY-MM-DD') >= '2015-06-24' AND TO_CHAR(chgroup.created,'YYYY-MM-DD') <= '2015-06-30'
to
chgroup.created BETWEEN TO_DATE('2015-06-24','YYYY-MM-DD') AND TO_DATE('2015-06-30','YYYY-MM-DD')
as it allows any index on chgroup.created to be used.

And a last question, in what way is the count wrong?

I'm also wondering whether CURRENTWEEK_COUNT means the last seven days or a work week; the same goes for LASTWEEK_COUNT and LASTMONTH_COUNT
Jörgen Andersson 28-Jun-15 14:38pm View
Why do you wake up a five-year-old question that HAS an accepted answer?
Jörgen Andersson 25-Jun-15 2:46am View
Correct in principle, but not complete. You could add some information on linked servers.