As Paul writes: No, this is unsafe, and I would like to add empirical evidence. Create a table Table_1 with a single ID field and one record with the value 0. Then execute the following code simultaneously in two Management Studio query windows:
DECLARE @counter int
SET @counter = 0
WHILE @counter < 1000 BEGIN
    SET @counter = @counter + 1
    INSERT INTO Table_1
    SELECT MAX(ID) + 1 FROM Table_1
END
Then do
SELECT ID, COUNT(*) FROM Table_1 GROUP BY ID HAVING COUNT(*) > 1
On my SQL Server 2008, one ID (662) was created twice. Thus, the default isolation level applied to single statements is not sufficient.
EDIT: Obviously, wrapping the INSERT in BEGIN TRANSACTION and COMMIT does not fix it, because the default isolation level for transactions is still READ COMMITTED, which is not enough. Note that setting the transaction isolation level to REPEATABLE READ is also insufficient. The only way to make this code safe is to add
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
at the top. This, however, caused deadlocks from time to time in my tests.
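For completeness, the serializable variant of the loop would look roughly like this (a sketch combining the statements above; it prevents duplicates, but as noted it is prone to deadlocks, because two serializable transactions can each take a shared range lock for the MAX(ID) read and then block each other's INSERT):

```sql
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE

DECLARE @counter int
SET @counter = 0
WHILE @counter < 1000 BEGIN
    SET @counter = @counter + 1
    BEGIN TRANSACTION
        -- The shared range lock from the MAX(ID) read is held
        -- until COMMIT, so no concurrent INSERT can slip in between.
        INSERT INTO Table_1
        SELECT MAX(ID) + 1 FROM Table_1
    COMMIT
END
```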
EDIT: The only solution I found that is safe and does not create deadlocks (at least in my tests) is to explicitly lock the table exclusively (the default transaction isolation level is sufficient here). Beware: this solution may reduce performance:
...loop stuff...
BEGIN TRANSACTION
    SELECT * FROM Table_1 WITH (TABLOCKX, HOLDLOCK) WHERE 1=0
    INSERT INTO Table_1
    SELECT MAX(ID) + 1 FROM Table_1
COMMIT
...loop end...
Heinzi