Unexpected performance results comparing INT and BIGINT IDENTITY columns

I was asked to run some performance tests on SQL Server 2008. As part of this, I compared the insert speed of an IDENTITY column used as the primary key (PK), declared as INT versus BIGINT. I have a simple script that inserts 100,000 rows for each type and reports the insert time. It looks like this:

    SET NOCOUNT ON

    CREATE TABLE TestData (
        PK INT IDENTITY PRIMARY KEY,
        Dummy INT
    )

    DECLARE @Rows INT
    DECLARE @Start DATETIME
    SET @Rows = 100000
    SET @Start = GETDATE()

    WHILE @Rows > 0
    BEGIN
        INSERT INTO TestData (Dummy) VALUES (@Rows)
        SET @Rows = @Rows - 1
    END

    SELECT @Start, GETDATE(), DATEDIFF(MS, @Start, GETDATE())

    DROP TABLE TestData

To test BIGINT IDs, I used a slightly modified version:

    SET NOCOUNT ON

    CREATE TABLE TestData (
        PK BIGINT IDENTITY PRIMARY KEY,
        Dummy INT
    )

    DECLARE @Rows INT
    DECLARE @Start DATETIME
    SET @Rows = 100000
    SET @Start = GETDATE()

    WHILE @Rows > 0
    BEGIN
        INSERT INTO TestData (Dummy) VALUES (@Rows)
        SET @Rows = @Rows - 1
    END

    SELECT @Start, GETDATE(), DATEDIFF(MS, @Start, GETDATE())

    DROP TABLE TestData

To my surprise, the BIGINT version is consistently faster than the INT version: on my test machine the INT version takes about 30 seconds and the BIGINT version about 25 seconds. The test machine has a 64-bit processor, but it runs 32-bit Windows and the 32-bit version of SQL Server 2008.

Can anyone else reproduce, confirm, or refute these results, or point out something I may have missed?

+4
4 answers

To take this a step further, try the same test with a VARCHAR key, for example:

    SET NOCOUNT ON

    CREATE TABLE TestData (
        PK VARCHAR(8) PRIMARY KEY,
        Dummy INT
    )

    DECLARE @Rows INT
    DECLARE @Start DATETIME
    SET @Rows = 100000
    SET @Start = GETDATE()

    WHILE @Rows > 0
    BEGIN
        INSERT INTO TestData (PK, Dummy) VALUES (CONVERT(VARCHAR(8), @Rows), @Rows)
        SET @Rows = @Rows - 1
    END

    SELECT @Start, GETDATE(), DATEDIFF(MS, @Start, GETDATE())

    DROP TABLE TestData

I expected this to be much slower, since the script has to supply the "identity" value itself and there are string conversions involved. I also sized the key as VARCHAR(8) to match the byte count of a BIGINT. And yet, in my tests it runs faster than the INT test above.

What I take away from this is that inserting rows into an empty table is pretty fast no matter what key type you throw at it. The performance consequences further down the road (other indexes on the table, inserting rows once the table already contains a lot of data, and so on) are probably far more important.
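
To put a number on that, one could rerun the same timing loop against a table that already contains data and carries a secondary index, so every insert also has to maintain that index. This is a rough sketch only; the table name, index name, and 1,000,000-row pre-fill are my own choices for illustration:

    SET NOCOUNT ON

    CREATE TABLE TestData2 (
        PK INT IDENTITY PRIMARY KEY,
        Dummy INT
    )
    CREATE INDEX IX_TestData2_Dummy ON TestData2 (Dummy)

    -- Pre-fill so the timed inserts land in a non-empty B-tree
    DECLARE @Rows INT
    SET @Rows = 1000000
    WHILE @Rows > 0
    BEGIN
        INSERT INTO TestData2 (Dummy) VALUES (@Rows)
        SET @Rows = @Rows - 1
    END

    -- Now time the same 100,000-row loop as in the question
    DECLARE @Start DATETIME
    SET @Rows = 100000
    SET @Start = GETDATE()
    WHILE @Rows > 0
    BEGIN
        INSERT INTO TestData2 (Dummy) VALUES (@Rows)
        SET @Rows = @Rows - 1
    END
    SELECT @Start, GETDATE(), DATEDIFF(MS, @Start, GETDATE())

    DROP TABLE TestData2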

+1

Server1. On SQL Server 2005 SP3 64-bit, I just tried it (INT first, then BIGINT) and got 2.9 and 2.6 seconds. Increasing the row count to 500,000, I got 15.2 and 15.3 seconds.

  • Next, three more 500K runs, INT then BIGINT: 14.0/14.6 s, 14.0/15.3 s, and 14.7/15.3 s. On average, INT was about 5.8% faster than BIGINT.
  • Reversing the order to BIGINT then INT: 15.4/13.8 s, 15.3/15.4 s, and 12.9/12.7 s. INT was about 4% faster here.

Server2. On SQL Server 2000 SP4 Enterprise Edition:

  • INT then BIGINT: 13.7/10.9 s, 10.4/13.9 s, and 9.9/10.2 s. INT was about 2.9% faster.
  • Reversing the order to BIGINT then INT: 10.2/13.3 s, 10.2/10.1 s, and 11.2/10.0 s. BIGINT was about 5.7% faster.

Basically, INT is often, but not always, faster than BIGINT, and the difference is nowhere near as large as the run-to-run variance I'm seeing.
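
For what it's worth, here is a sketch of how runs like these could be automated so that the key type alternates on every trial, which helps cancel out ordering and warm-up effects. The loop count, table name, and use of dynamic SQL are my own choices, not part of the original tests:

    SET NOCOUNT ON

    DECLARE @Trial INT, @TypeName SYSNAME, @Sql NVARCHAR(MAX)
    SET @Trial = 0

    WHILE @Trial < 6
    BEGIN
        -- Alternate the key type on every trial
        IF @Trial % 2 = 0 SET @TypeName = N'INT' ELSE SET @TypeName = N'BIGINT'

        SET @Sql = N'
        CREATE TABLE TestData ( PK ' + @TypeName + N' IDENTITY PRIMARY KEY, Dummy INT )
        DECLARE @Rows INT, @Start DATETIME
        SET @Rows = 100000
        SET @Start = GETDATE()
        WHILE @Rows > 0
        BEGIN
            INSERT INTO TestData (Dummy) VALUES (@Rows)
            SET @Rows = @Rows - 1
        END
        SELECT ''' + @TypeName + N''' AS KeyType, DATEDIFF(MS, @Start, GETDATE()) AS ElapsedMs
        DROP TABLE TestData'

        EXEC sp_executesql @Sql
        SET @Trial = @Trial + 1
    END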

+1

Just a guess: have you tried running the BIGINT test before the INT test? Database servers like to keep things cached in memory to speed up operations like this...
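
If caching is the suspect, one way to level the playing field is to clear the buffer pool and plan cache between trials. Run this only on a dedicated test server, since it affects the whole instance and requires sysadmin rights:

    CHECKPOINT                -- flush dirty pages so clean buffers can be dropped
    DBCC DROPCLEANBUFFERS     -- empty the buffer pool (cold-start the data cache)
    DBCC FREEPROCCACHE        -- clear cached execution plans

With these in between, each timed run starts from a cold cache instead of benefiting from the previous run.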

0

I tried this on my SQL Server 2008 instance: INT took 14 seconds, BIGINT took 18 seconds.

0
