MongoDB locking is different
Locking in MongoDB does not work like locking in a traditional RDBMS, so a little explanation is in order. In earlier versions of MongoDB, there was a single global reader/writer latch. Starting with MongoDB 2.2, there is a reader/writer latch for each database.
Reader/writer latch
The latch is multiple-reader, single-writer, and writer-greedy. This means that:
- A database can have an unlimited number of simultaneous readers.
- There can be only one writer at a time on any one database (more on this in a bit).
- Writers block readers.
- By "writer-greedy" I mean that once a write request comes in, all new readers are blocked until the write completes (more on this below).
Note that I call this a "latch" rather than a "lock": it is lightweight, and in a properly designed schema the write latch is held for on the order of ten microseconds. See here for more on reader/writer latches.
In MongoDB, you can run as many simultaneous queries as you like: as long as the relevant data is in RAM, they will all be satisfied without locking conflicts.
Atomic Document Update
Recall that in MongoDB the unit of transaction is a single document: all updates to one document are atomic. MongoDB achieves this by holding the write latch only for as long as it takes to update that document in RAM. If any slow work is needed (in particular, if a document or index entry has to be paged in from disk), the operation yields the write latch. When an operation yields the latch, the next queued operation can proceed.
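As a toy model of this behavior, here is a sketch in plain Python threading (not MongoDB internals; the `inc_field` helper is hypothetical and stands in for a `$inc`-style update): the write latch is held only for the in-memory change, so each update is atomic and writes serialize cleanly.

```python
import threading

# Toy model: one write latch per database (as in MongoDB 2.2+), one in-memory document.
db_write_latch = threading.Lock()
doc = {"_id": 1, "count": 0}

def inc_field(document, field, amount):
    """Apply a $inc-style update; the latch is held only for the in-memory change."""
    with db_write_latch:
        document[field] = document.get(field, 0) + amount
    # Any slow work (journaling, paging a document in from disk) would happen
    # with the latch released, so other queued operations can proceed meanwhile.

threads = [threading.Thread(target=inc_field, args=(doc, "count", 1))
           for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# All 100 increments are applied atomically: doc["count"] == 100
```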
This means that writes to all documents within a single database get serialized. That can be a problem if you have a poor schema design and your writes take a long time, but in a properly designed schema, locking is not a problem.
Writer-Greedy
A few more words about writer greediness:
Only one writer can hold the latch at a time, while multiple readers can hold it simultaneously. In a naive implementation, writers can starve indefinitely as long as even one reader is active. To avoid this, in MongoDB's implementation, as soon as any one thread makes a write request for a particular latch:
- All subsequent readers needing that latch will block.
- The writer will wait until all current readers have finished.
- The writer will acquire the write latch, do its work, and then release the write latch.
- Then all the queued readers will proceed.
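The steps above can be sketched as a condition-variable latch. This is a simplified illustration in Python, not MongoDB's actual implementation; the class name and structure are my own:

```python
import threading

class WriterGreedyRWLatch:
    """Reader/writer latch where a waiting writer blocks new readers."""

    def __init__(self):
        self._cond = threading.Condition()
        self._readers = 0          # readers currently holding the latch
        self._writer = False       # is a writer holding the latch?
        self._writers_waiting = 0  # writers queued for the latch

    def acquire_read(self):
        with self._cond:
            # Greedy: block while a writer holds the latch OR one is waiting.
            while self._writer or self._writers_waiting > 0:
                self._cond.wait()
            self._readers += 1

    def release_read(self):
        with self._cond:
            self._readers -= 1
            if self._readers == 0:
                self._cond.notify_all()  # let a waiting writer proceed

    def acquire_write(self):
        with self._cond:
            self._writers_waiting += 1   # from now on, new readers block
            # Wait for current readers to drain and any active writer to finish.
            while self._writer or self._readers > 0:
                self._cond.wait()
            self._writers_waiting -= 1
            self._writer = True

    def release_write(self):
        with self._cond:
            self._writer = False
            self._cond.notify_all()      # wake queued readers and writers
```

Note how `acquire_read` checks `_writers_waiting`, not just `_writer`: that single check is what makes the latch writer-greedy.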
The actual behavior is complex, because this writer-greedy behavior interacts with yielding in ways that may not be obvious. Recall that starting with version 2.2 there is a separate latch for each database, so writes to any collection in database "A" acquire a different latch than writes to any collection in database "B".
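A minimal sketch of that per-database arrangement, with plain Python locks standing in for MongoDB's latches (the `write_to` helper is hypothetical):

```python
import threading
from collections import defaultdict

# One latch per database name, as in MongoDB 2.2+, rather than one global latch.
db_latches = defaultdict(threading.Lock)

def write_to(db_name, do_update):
    # Writes to "A" and "B" grab different latches, so they never contend;
    # two writes to "A" serialize on the same latch.
    with db_latches[db_name]:
        do_update()
```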
Specific questions
Regarding the specific questions:
- Locks (actually latches) are held by the MongoDB kernel only for as long as it takes to update a single document.
- If you have multiple connections to MongoDB, each performing a series of writes, the latch is held per database, and only for as long as each individual write takes.
- Writes (update/insert/delete) from all the connections will interleave.
Although this sounds like it would be a big performance problem, in practice it does not slow things down. With a properly designed schema and a typical workload, MongoDB will saturate the disk I/O capacity, even of an SSD, before the lock percentage on any database rises above 50%.
The highest-throughput MongoDB cluster that I am aware of is currently doing 2 million writes per second.