A performance-tuning question came to my mind:
What is the best way, performance-wise, to write a stored procedure that picks up data in chunks from a table, performs a lookup, and inserts the result into another table,
repeating the process until the main table has been traversed completely?

What I have tried:

I know we can use a temp table, table variables, or views in the SP to achieve this, but what would be the best way?
Posted
Updated 8-Mar-18 20:52pm
Comments
CHill60 8-Mar-18 9:14am    
The "best" way depends on many things and you have not supplied enough information for us to help. I doubt traversing the table in this way will improve the performance of such an activity, nor do I understand why you are inserting data that you already have into another table.
Edit your question to give us a better idea of what you are trying to achieve, and show us the code that you think could be improved.
ZurdoDev 8-Mar-18 9:28am    
Possibly SSIS but you haven't said enough to know for sure.

1 solution

One of the simplest ways to move data between tables in chunks is:
SQL
DECLARE @CHUNK_SIZE int = 10000;
DECLARE @RC int = @CHUNK_SIZE;   -- rows copied in the last iteration

WHILE @RC > 0
BEGIN
    INSERT INTO TargetTable (Mycolumns)
    SELECT  TOP (@CHUNK_SIZE) Mycolumns   -- TOP needs parentheses when used with a variable
    FROM    SourceTable
    WHERE   Conditions                    -- must exclude already-processed rows
    ;
    SET @RC = @@ROWCOUNT                  -- 0 when there is nothing left to copy
    ;
END
Here the conditions must exclude rows that have already been processed.
The performance depends mostly on those conditions (and on the supporting indexes).
You can for example use NOT EXISTS, or use an ordered query and keep a variable at the Max(ID) of the last copied chunk (for instance via the OUTPUT clause), or you can delete processed rows from the SourceTable.
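As a minimal sketch of the ordered-query variant: assuming SourceTable has an indexed, ever-increasing Id column (all table and column names here are placeholders), and staging each chunk in a table variable instead of reading an OUTPUT clause, it could look like this:
SQL
-- Sketch only: SourceTable, TargetTable, Id and Mycolumns are placeholder names.
DECLARE @CHUNK_SIZE int = 10000;
DECLARE @RC int = @CHUNK_SIZE;                 -- rows copied in the last iteration
DECLARE @LastId int = 0;                       -- watermark: highest Id copied so far

DECLARE @Batch TABLE (Id int, Mycolumns int);  -- staging area for one chunk

WHILE @RC > 0
BEGIN
    DELETE FROM @Batch;                        -- clear the previous chunk

    INSERT INTO @Batch (Id, Mycolumns)
    SELECT TOP (@CHUNK_SIZE) Id, Mycolumns
    FROM   SourceTable
    WHERE  Id > @LastId                        -- excludes already-processed rows
    ORDER BY Id;

    SET @RC = @@ROWCOUNT;

    INSERT INTO TargetTable (Mycolumns)
    SELECT Mycolumns
    FROM   @Batch;

    -- advance the watermark to the last row of this chunk
    SELECT @LastId = ISNULL(MAX(Id), @LastId)
    FROM   @Batch;
END
With an index on SourceTable(Id), each iteration only reads the next chunk, so the cost per loop stays roughly constant instead of growing as more and more rows have to be filtered out by a NOT EXISTS check.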
But as you've already been told, we don't have enough information to tell you the best way.
Comments
CPallini 9-Mar-18 3:07am    
5.
Richard Deeming 9-Mar-18 8:56am    
Probably worth pointing out that, unless "Conditions" finds a way to exclude the already-processed rows, or you delete the already-processed rows, you're going to end up processing the same chunk of rows over and over again. :)
Jörgen Andersson 9-Mar-18 8:57am    
I thought I did: "But the performance is mostly depending on the conditions (and indexes) you use to determine which rows that already has been copied between the tables."
Richard Deeming 9-Mar-18 9:01am    
D'Oh!

OK, it might be worth making that bit more in-your-face obvious for old farts like me who need more coffee! :D
Jörgen Andersson 9-Mar-18 9:10am    
Clarified it, there are people for whom coffee isn't enough. :)
