In SQL Server, do composite primary keys increase the likelihood of a deadlock?

I was wondering: due to the increase in row locking, and/or the extra time taken to acquire locks before a commit, would a compound (composite) primary key defined on a table increase the likelihood of a deadlock when the table is updated by multiple threads at the same time?

Thanks for any help.

+8
sql sql-server
3 answers

If you use composite PKs and insert large amounts of data in parallel, you may run into lock-resource hash collisions. See "The Mysterious Case of a Complex Deadlock and a Not-So-Logical Lock" for a real-world example.

And to explain lock-resource hash collisions, I will quote Remus Rusanu's "%%lockres%% collision probability magic marker: 16,777,215" (recommended reading):

The lock manager in SQL Server has no knowledge of what it is locking; it just locks resources (essentially strings). Higher-level components, such as the storage engine's access methods, present the resource to the lock manager and request the required lock. When locking rows in a heap or b-tree, the storage engine synthesizes the resource from the record identifier. Since these resources are of limited length, the storage engine must reduce the effective key length to the maximum length that can be presented to the lock manager, which means the record key is reduced to 6 bytes. This is achieved by hashing the key into a 6-byte hash value.

[...]

There are 281,474,976,710,656 distinct possible values in 6 bytes. Is that a big number? Actually, it is not. [...] So the %%lockres%% hash will produce two records with the same hash, with 50% probability, out of any table of just 16,777,215 records.
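You can inspect the value the lock manager actually locks via the undocumented %%lockres%% virtual column, which is the tool Rusanu's article itself uses. A minimal sketch, assuming a hypothetical dbo.OrderLine table (the column names are illustrative):

```sql
-- %%lockres%% (undocumented) returns the lock-resource string for each
-- row, i.e. the 6-byte hash of the index key that KEY locks are taken on.
SELECT %%lockres%% AS lock_resource, OrderNumber, LineNumber
FROM dbo.OrderLine;

-- Rows that share a lock_resource value will block (and can deadlock)
-- each other even though their keys differ; grouping by it finds them:
SELECT %%lockres%% AS lock_resource, COUNT(*) AS rows_sharing_hash
FROM dbo.OrderLine
GROUP BY %%lockres%%
HAVING COUNT(*) > 1;
```

On a large table with a long composite key, the second query shows how often unrelated rows end up contending for the same lock resource.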

+5

Since the SQL Server lock manager works with the lockhash value (and not with the PK directly), I would conclude that there is no difference in locking behavior between a single-column PK and a composite PK.

Improving lock key collision minimization in SQL Server 2008R2 and its effect on concurrency

Unlike some other database vendors' implementations, SQL Server's lock manager has a logical component. Rather than using a physical description of a row, page, or table, SQL Server uses a lockhash value, stored in memory, to represent the lock in the lock manager's lock structure.

I would not expect a hash collision on the lockhash value to be any more likely for a composite key than for a single key column. As the linked article notes, the lock hashing was greatly improved in SQL Server 2008 R2, and composite keys were specifically addressed.

Prior to 2008 R2, the lockhash was less robust for both single and composite keys.
Either way, it is good practice to keep your PK short.
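To make "keep your PK short" concrete, here is a hedged, hypothetical comparison (table and column names are invented for illustration). Both designs get KEY locks on a 6-byte lockhash either way, but the short surrogate key keeps the clustered index, and every secondary index and foreign key that copies the key, much smaller:

```sql
-- Wide composite PK: ~88 bytes of key copied into every secondary index.
CREATE TABLE dbo.OrderLineWide (
    OrderNumber varchar(40) NOT NULL,
    ProductCode varchar(40) NOT NULL,
    LineNumber  int         NOT NULL,
    CONSTRAINT PK_OrderLineWide
        PRIMARY KEY (OrderNumber, ProductCode, LineNumber)
);

-- Narrow surrogate PK: 8 bytes; the natural key stays as a unique constraint.
CREATE TABLE dbo.OrderLineNarrow (
    OrderLineId bigint IDENTITY NOT NULL,
    OrderNumber varchar(40) NOT NULL,
    ProductCode varchar(40) NOT NULL,
    LineNumber  int         NOT NULL,
    CONSTRAINT PK_OrderLineNarrow PRIMARY KEY (OrderLineId),
    CONSTRAINT UQ_OrderLineNarrow UNIQUE (OrderNumber, ProductCode, LineNumber)
);
```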

(As an aside, .NET's KeyValuePair and Tuple do not generate good hash codes either.)

+3

In general, I would say no, not with well-designed code. The reason is that the causes of deadlocks, and the methods to avoid or eliminate them, are generally not timing-dependent. Most deadlocks occur because different threads take different update paths: for example, code block A updates table1 and then table2, while code block B updates table2 and then table1. The way to prevent this is to ensure that different code blocks update objects in the same order. In other words, avoid scenarios where thread 1 holds a lock on A and wants to update B, while thread 2 holds a lock on B and wants to update A.
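The opposite-order scenario above can be sketched in T-SQL (table, column, and key values are hypothetical):

```sql
-- Session A:
BEGIN TRANSACTION;
UPDATE dbo.table1 SET col = 1 WHERE id = 42;  -- A locks the row in table1
-- ... meanwhile Session B has run:
--     BEGIN TRANSACTION;
--     UPDATE dbo.table2 SET col = 1 WHERE id = 42;  -- B locks the row in table2
UPDATE dbo.table2 SET col = 1 WHERE id = 42;  -- A now waits on B's lock
-- When B next tries UPDATE dbo.table1 ..., B waits on A: a cycle.
-- SQL Server detects the cycle and kills one session with error 1205.
COMMIT;
```

If both sessions updated table1 before table2, the second session would simply wait for the first to commit and then proceed; there is no cycle, so no deadlock.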

However, where conflicting code blocks or statements already exist, I think composite keys could increase the frequency of deadlocks: the longer a transaction takes, the more time the other thread has to lock another resource and cause a deadlock.

Except for a very narrow edge case, though, I do not think composite keys affect the occurrence of deadlocks (at least not in my experience).

+2
