Insert / update data in a table from a stored procedure result set

I have a proc_item stored procedure that retrieves data from different tables (using joins). I want the result set from the stored procedure to be inserted (if the data is new) or updated (if the data already exists) into another table called Item. Can someone give me some idea of how I can do this? I am new to SQL, stored procedures and loops.

thanks

3 answers

Take a look at @srutzky's solution below, which is better suited to this problem.


The first thing you could do is capture everything in a table variable. To do this, you need to define a table variable that has the same columns as the result set of your stored procedure:

    DECLARE @myTempTableName TABLE
    (
        dataRow1 DATATYPE,
        dataRow2 DATATYPE,
        ...
    )

    INSERT INTO @myTempTableName (dataRow1, dataRow2, ...)
    EXEC( *mystoredprocedure* )
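
As an illustration only, this could look like the following for the question's scenario, assuming a hypothetical proc_item in the dbo schema that returns three columns named ItemId, ItemName and Price (the real column names and types must match your procedure's result set):

    -- hypothetical sketch: adjust the columns to match what proc_item actually returns
    DECLARE @ItemResults TABLE
    (
        ItemId   INT,
        ItemName NVARCHAR(100),
        Price    DECIMAL(10, 2)
    )

    INSERT INTO @ItemResults (ItemId, ItemName, Price)
    EXEC dbo.proc_item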

Now all the data you need is in the table variable. The next step is to decide which rows need an update and which need an insert. Let's say dataRow1 is the column used to check whether a row already exists (for example, a name or an identifier), and assume it is unique (otherwise you need some other unique value to iterate through the temporary table).

    DECLARE @rows INT,
            @dataRow1 DATATYPE,
            @dataRow2 DATATYPE,
            ...

    -- count how many rows are in the table
    SELECT @rows = COUNT(1) FROM @myTempTableName

    -- loop while there are still rows in the temporary table
    WHILE (@rows > 0)
    BEGIN
        -- select the first row and use dataRow1 as the indicator of which row it is;
        -- if dataRow1 is not unique, the stored procedure should provide an index as an additional column
        SELECT TOP 1 @dataRow1 = dataRow1,
                     @dataRow2 = dataRow2,
                     ...
        FROM @myTempTableName

        -- check if the value you would like to insert already exists: if yes --> update, else --> insert
        IF EXISTS (SELECT * FROM *TableNameToInsertOrUpdateValues* WHERE dataRow1 = @dataRow1)
            UPDATE *TableNameToInsertOrUpdateValues*
            SET    dataRow2 = @dataRow2
            WHERE  dataRow1 = @dataRow1
        ELSE
            INSERT INTO *TableNameToInsertOrUpdateValues* (dataRow1, dataRow2)
            VALUES (@dataRow1, @dataRow2)

        -- IMPORTANT: delete the row you just worked on from the temporary table
        DELETE FROM @myTempTableName WHERE dataRow1 = @dataRow1

        SELECT @rows = COUNT(1) FROM @myTempTableName
    END -- end of while loop
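
To make the placeholders concrete: inside such a loop, the insert-or-update check against the question's Item table could look like this, again assuming hypothetical ItemId and ItemName columns with ItemId as the unique value (the @ItemId and @ItemName variables would be declared and filled exactly like @dataRow1 and @dataRow2 above):

    -- hypothetical sketch: ItemId plays the role of dataRow1, ItemName of dataRow2
    IF EXISTS (SELECT * FROM dbo.Item WHERE ItemId = @ItemId)
        UPDATE dbo.Item
        SET    ItemName = @ItemName
        WHERE  ItemId = @ItemId
    ELSE
        INSERT INTO dbo.Item (ItemId, ItemName)
        VALUES (@ItemId, @ItemName)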

The DECLARE could also be placed at the beginning of this query. I put it in the place where I used it to make it easier to read.

This is where I got part of my code from, and it is also useful for iterating through tables (a solution from @cmsjr that works without a cursor): Cursor inside cursor


You need to create a temporary table to store the results of the stored procedure and then merge those results into the real table. A temporary table is recommended over a table variable, since it will perform better when joined to the existing table, due to better statistics.

    CREATE TABLE #TempResults
    (
        Field1 DATATYPE1,
        Field2 DATATYPE2,
        ...,
        PRIMARY KEY CLUSTERED (KeyField1, ...)
    )

    INSERT INTO #TempResults (Field1, Field2, ...)
    EXEC Schema.ProcName @Param1, ...
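
As a sketch for the question's scenario, assuming proc_item returns hypothetical ItemId, ItemName and Price columns and that ItemId is the key, this could look like:

    -- hypothetical sketch: column names, types and the key are assumptions
    CREATE TABLE #ItemResults
    (
        ItemId   INT            NOT NULL,
        ItemName NVARCHAR(100)  NULL,
        Price    DECIMAL(10, 2) NULL,
        PRIMARY KEY CLUSTERED (ItemId)
    )

    INSERT INTO #ItemResults (ItemId, ItemName, Price)
    EXEC dbo.proc_item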

Now there are two ways to do the merge. The first works in all versions of SQL Server; the second uses the MERGE command, which was introduced in SQL Server 2008.

    -- this should work on all SQL Server versions
    UPDATE rt
    SET    rt.Field2 = tmp.Field2,
           ...
    FROM   Schema.RealTable rt
    INNER JOIN #TempResults tmp
            ON tmp.KeyField1 = rt.KeyField1
            ...

    INSERT INTO Schema.RealTable (Field1, Field2, ...)
    SELECT tmp.Field1, tmp.Field2, ...
    FROM   #TempResults tmp
    LEFT JOIN Schema.RealTable rt
           ON rt.KeyField1 = tmp.KeyField1
           ...
    WHERE  rt.KeyField1 IS NULL
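
Filled in with the hypothetical Item / #ItemResults columns used above, this first option (UPDATE then INSERT) could look like:

    -- hypothetical sketch based on the assumed ItemId / ItemName / Price columns
    UPDATE itm
    SET    itm.ItemName = tmp.ItemName,
           itm.Price    = tmp.Price
    FROM   dbo.Item itm
    INNER JOIN #ItemResults tmp
            ON tmp.ItemId = itm.ItemId

    INSERT INTO dbo.Item (ItemId, ItemName, Price)
    SELECT tmp.ItemId, tmp.ItemName, tmp.Price
    FROM   #ItemResults tmp
    LEFT JOIN dbo.Item itm
           ON itm.ItemId = tmp.ItemId
    WHERE  itm.ItemId IS NULL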

OR

    -- the MERGE command was introduced in SQL Server 2008
    MERGE Schema.RealTable AS target
    USING (SELECT Field1, Field2, ... FROM #TempResults) AS source (Field1, Field2, ...)
          ON (target.KeyField1 = source.KeyField1)
    WHEN MATCHED THEN
        UPDATE SET Field2 = source.Field2, ...
    WHEN NOT MATCHED THEN
        INSERT (Field1, Field2, ...)
        VALUES (source.Field1, source.Field2, ...);

More information about the MERGE command can be found here:
http://msdn.microsoft.com/en-us/library/bb510625(v=SQL.100).aspx
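
For the same hypothetical Item / #ItemResults columns, a MERGE sketch could look like this:

    -- hypothetical sketch: MERGE with the assumed ItemId / ItemName / Price columns
    MERGE dbo.Item AS target
    USING (SELECT ItemId, ItemName, Price FROM #ItemResults) AS source
          ON (target.ItemId = source.ItemId)
    WHEN MATCHED THEN
        UPDATE SET ItemName = source.ItemName,
                   Price    = source.Price
    WHEN NOT MATCHED THEN
        INSERT (ItemId, ItemName, Price)
        VALUES (source.ItemId, source.ItemName, source.Price);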

Now, if you have a large result set to merge, and the table you are merging into is very large and has a lot of activity on it (such that this type of operation could cause blocking), then you can loop and process batches of, say, 1000 rows at a time. Something like this:

    <insert CREATE TABLE / INSERT...EXEC block>

    CREATE TABLE #CurrentBatch
    (
        Field1 DATATYPE1,
        Field2 DATATYPE2,
        ...
    )

    DECLARE @BatchSize SMALLINT = ????

    WHILE (1 = 1)
    BEGIN
        -- grab a set of rows to work on and remove them from #TempResults
        DELETE TOP (@BatchSize) tmp
        OUTPUT deleted.Field1, deleted.Field2, ...
        INTO   #CurrentBatch (Field1, Field2, ...)
        FROM   #TempResults tmp

        IF (@@ROWCOUNT = 0)
        BEGIN
            -- no more rows
            BREAK
        END

        <insert either the UPDATE / INSERT...SELECT block or the MERGE block from above and change references to #TempResults to #CurrentBatch>

        TRUNCATE TABLE #CurrentBatch
    END

First you need to insert the data into a temporary container, such as a temporary table or a table variable. Then you can work with that table as usual: join it, dump further result sets into it, and so on.

Check this out for options for storing stored procedure output in a table.
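
A minimal sketch of both containers, assuming proc_item returns, say, hypothetical ItemId and ItemName columns:

    -- hypothetical sketch: a temporary table ...
    CREATE TABLE #ProcOutput (ItemId INT, ItemName NVARCHAR(100))

    INSERT INTO #ProcOutput (ItemId, ItemName)
    EXEC dbo.proc_item

    -- ... or a table variable
    DECLARE @ProcOutput TABLE (ItemId INT, ItemName NVARCHAR(100))

    INSERT INTO @ProcOutput (ItemId, ItemName)
    EXEC dbo.proc_item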

