SELECTs cannot block other SELECTs because they only acquire shared locks. You say we must take into account that these SELECTs now require exclusive read locks, but that is impossible because 1) there is no such thing as an exclusive read lock and 2) reads do not acquire exclusive locks.
But you ask a more general question: can simple statements deadlock? The answer is a definite, resounding YES. Locks are acquired as execution proceeds; they are not analyzed up front, sorted, and then acquired in some predetermined order. It would be impossible for the engine to know the needed locks in advance, because they depend on the actual data on disk, and in order to read the data the engine needs to lock the data.
Deadlocks between simple statements (SELECT vs. UPDATE, or SELECT vs. DELETE) caused by a different order of index access are quite common and very easy to investigate, diagnose, and fix. Note, though, that there is always a write operation involved, since reads cannot block each other. For this discussion, a SELECT with an UPDLOCK or XLOCK hint counts as a write. You don't even need a JOIN: a secondary index alone can introduce the access-order problem that leads to a deadlock, see Read/Write Deadlock.
Finally, whether you write SELECT FROM A JOIN B or SELECT FROM B JOIN A is completely irrelevant. The query optimizer is free to rearrange the access order as it sees fit; the actual text of the query does not impose the order of execution in any way.
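The access-order mechanics can be illustrated outside the database. The following Python sketch is only an analogy (in-process locks, not SQL Server lock internals): two workers acquire the same pair of resources in opposite order, which is exactly the cycle a deadlock is made of. Timeouts and a barrier are used here purely so the demo reports the cycle deterministically instead of hanging:

```python
import threading

# Two "resources", standing in for two rows locked in opposite order.
lock_a = threading.Lock()
lock_b = threading.Lock()
barrier = threading.Barrier(2)
results = {}

def worker(name, first, second):
    with first:
        barrier.wait()                        # both workers now hold their first lock
        # Try the second lock with a timeout so we can observe the
        # deadlock instead of waiting forever.
        acquired = second.acquire(timeout=0.2)
        results[name] = acquired
        if acquired:
            second.release()
        barrier.wait()                        # hold the first lock until both have tried

t1 = threading.Thread(target=worker, args=("t1", lock_a, lock_b))
t2 = threading.Thread(target=worker, args=("t2", lock_b, lock_a))
t1.start(); t2.start()
t1.join(); t2.join()

print(results)  # both values are False: each waits on a lock the other holds
```

If both workers took the locks in the same order, one would simply wait for the other and both would finish — which is the essence of the "consistent access order" advice below.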
Update
How, then, can we build a general strategy for a "multiple entity" database under READ COMMITTED that is not prone to deadlocks?
I'm afraid there is no cookie-cutter recipe; the solution will depend from case to case. Ultimately, in database applications deadlocks are a fact of life. I understand this may sound absurd ("we landed on the Moon, but we can't write a correct database application"), but there are strong factors at play that pretty much guarantee applications will eventually encounter deadlocks. Luckily, deadlocks are among the easiest errors to handle: simply read the state again, apply the logic, and write the new state. That being said, there are some good practices that can dramatically reduce the frequency of deadlocks, to the point that they all but vanish:
- Try to have a consistent access pattern for writes. Clearly defined rules stating things like "a transaction shall always lock tables in the following order: Customers → OrderHeaders → OrderLines". Note that the order has to be obeyed inside the transaction. Basically, rank all the tables in your schema and mandate that all updates occur in ranking order. This ultimately boils down to the code discipline of the individual contributor writing the code, who must ensure that writes, updates, and deletes happen in the proper order inside the transaction.
- Reduce the duration of writes. The usual wisdom goes as follows: at the beginning of the transaction do all the reads (read the existing state), then process the logic and compute the new values, then write all the updates at the end of the transaction. Avoid a pattern like "read → write → logic → read → write"; do "read → read → logic → write → write" instead. Of course, the true craftsmanship lies in dealing with the actual, real, individual cases where it seems writes must happen mid-transaction. A special note is due for one specific kind of transaction: those driven by a queue, which by their very definition start their activity by dequeueing (= a write) from the queue. Such applications have always been notoriously difficult to write and prone to errors (especially deadlocks); luckily, there are ways to do it, see Using Tables as Queues.
- Reduce the number of reads. Table scans are the most notorious cause of deadlocks. Proper indexing will not only eliminate deadlocks, it may also boost performance in the process.
- Snapshot isolation. This is the closest thing you'll get to a free lunch with regard to avoiding deadlocks. I intentionally put it last because it can mask other problems (for example, improper indexing) instead of fixing them.
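The first guideline above can be sketched in code. This hypothetical Python helper (the table names and ranks are just the example from the list, not a real API) enforces a schema-wide ranking so that every "transaction" acquires its locks in the same order, no matter how the caller lists the tables, which removes the possibility of a wait cycle:

```python
import threading

# Hypothetical ranking of the schema, mirroring the advice above.
TABLE_RANK = {"Customers": 0, "OrderHeaders": 1, "OrderLines": 2}
TABLE_LOCKS = {name: threading.Lock() for name in TABLE_RANK}

def lock_tables(*tables):
    """Acquire table locks in the mandated ranking order, regardless of
    the order the caller listed them in. Returns the acquisition order."""
    ordered = sorted(set(tables), key=TABLE_RANK.__getitem__)
    for name in ordered:
        TABLE_LOCKS[name].acquire()
    return ordered

def unlock_tables(ordered):
    # Release in reverse acquisition order.
    for name in reversed(ordered):
        TABLE_LOCKS[name].release()

# Even though the caller names OrderLines first, Customers is locked first.
held = lock_tables("OrderLines", "Customers")
print(held)  # ['Customers', 'OrderLines']
unlock_tables(held)
```

Because every participant sorts by the same ranking before acquiring, no two transactions can ever hold each other's next lock, so the cycle a deadlock requires cannot form.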
Trying to solve this with a LockCustomerByXXX approach, I'm afraid, does not work. Pessimistic locking does not scale. Optimistic concurrency updates are the way to go if you want any sort of decent performance.
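As a rough illustration of the optimistic approach, here is a minimal Python/sqlite3 sketch (the schema and names are invented for the example): read the state and remember the version you saw, compute the new state, then write it back only if the version is unchanged, re-reading and retrying otherwise:

```python
import sqlite3

# Minimal sketch of optimistic concurrency with a version column;
# table and column names are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Customers (id INTEGER PRIMARY KEY, name TEXT, version INTEGER)")
conn.execute("INSERT INTO Customers VALUES (1, 'Alice', 1)")

def write_if_unchanged(conn, cust_id, seen_version, new_name):
    # The conditional UPDATE succeeds only if nobody changed the row
    # since we read it; rowcount == 0 means we lost the race and must retry.
    cur = conn.execute(
        "UPDATE Customers SET name = ?, version = version + 1 "
        "WHERE id = ? AND version = ?", (new_name, cust_id, seen_version))
    return cur.rowcount == 1

# Read the state, remembering the version we saw.
name, version = conn.execute(
    "SELECT name, version FROM Customers WHERE id = 1").fetchone()

# Meanwhile, another writer commits first...
conn.execute("UPDATE Customers SET name = 'Eve', version = version + 1 WHERE id = 1")

print(write_if_unchanged(conn, 1, version, "Bob"))  # False: stale read, re-read and retry

name, version = conn.execute(
    "SELECT name, version FROM Customers WHERE id = 1").fetchone()
print(write_if_unchanged(conn, 1, version, "Bob"))  # True
```

No row stays locked between the read and the write, so concurrent readers and writers are never blocked waiting on each other; the cost is the occasional retry, which follows exactly the read-again/apply-logic/rewrite loop described earlier.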