Does locking on many different objects affect performance compared to using a single lock object?

I don't know whether this is a stupid question or not; locks and the Monitor are a black box to me.

But I am dealing with a situation where I can either lock on the same single object every time, or lock on an unbounded number of objects at a finer granularity.

I know that the second way will reduce lock contention, but I could end up using 10K objects as locks, and I don't know whether that has any cost.

Bottom line: does having many lock objects hurt locking performance, or does it make no difference?

Edit

I wrote a library that maintains a graph of objects; the number of objects can be very high. It is not thread safe at the moment, mainly for the reasons Eric mentioned in his comment.

Initially, I figured that if users want to do multithreading, they would have to take care of the locking themselves.

But now I wonder: if I do need to make it thread safe, what would be the best way to do it? (Note that making it thread safe will not be a short and easy trip for me, so prototyping and testing both solutions is not something I can easily do.)

Since the goal is to make each graph object thread safe, I could lock on each object instance whenever I access or change its properties. I know this is the best way to reduce contention, but I don't know whether it scales the same way as having a single lock for the entire graph.
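To make the two options concrete, here is a rough sketch; the Node and Graph types and their members are simplified placeholders, not my real API:

```csharp
using System.Collections.Generic;

// Option 1: fine-grained, one lock per graph object.
public class Node
{
    private readonly object _sync = new object();
    private string _label;

    public string Label
    {
        get { lock (_sync) { return _label; } }
        set { lock (_sync) { _label = value; } }
    }
}

// Option 2: coarse-grained, one lock for the whole graph.
public class Graph
{
    private readonly object _graphLock = new object();
    private readonly List<Node> _nodes = new List<Node>();

    public void Add(Node node)
    {
        lock (_graphLock) { _nodes.Add(node); }
    }
}
```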

I know there is a lot to consider: how many threads there are, and especially (I think) how likely it is that an object will be accessed/modified by several threads at the same time (which, in my estimation, is pretty low). But I can't find concrete information about locks and their overhead in this scenario.

multithreading, c#

3 answers

To get a clearer picture of what's going on, I looked at the source code of the Monitor class and its C++ counterpart in clr/src/vm/syncblk.cpp in the Microsoft Shared Source CLI (SSCLI).

To answer my own question: no, having a large number of lock objects does not hurt in any way I could find.

What I learned:

1) A lock that is already held by the same thread is handled "almost for free" (recursive acquisition).

2) A lock taken for the first time costs essentially one InterlockedCompareExchange.

3) Multiple threads waiting on a lock are cheap enough to track (a linked list is maintained, O(1) operations).

4) A thread waiting for a lock to be released is by far the most expensive case: a spin-wait is attempted first, and if that is not enough the thread is switched out and signaled (via a kernel event/mutex) to wake up when the lock is released.

Point 2) gave me my answer: whether you always lock on the same object or on 10K different ones, the cost is basically the same (some extra initialization happens the first time a given object is locked on, but it is not too bad). InterlockedCompareExchange does not care whether it is called on the same or on a different memory location (AFAIK).

Contention is by far the most important issue. Having many locks reduces (in my case) the likelihood of contention, so that can only be a good thing.

Point 1) is also an important lesson: if I lock/unlock on every property access/change, I can improve performance by locking the object once, changing many properties, and then releasing the lock. That way there is only one InterlockedCompareExchange, and the lock/unlock inside the property access/change implementations will only increment an internal counter.
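Roughly, as a standalone sketch (names are placeholders again): the batch method pays for one real acquisition, while the nested locks taken by the individual setters are recursive and only bump the Monitor's counter.

```csharp
public class GraphNode
{
    private readonly object _sync = new object();
    private string _label;
    private int _weight;

    // Each setter is individually thread safe.
    public void SetLabel(string label) { lock (_sync) { _label = label; } }
    public void SetWeight(int weight)  { lock (_sync) { _weight = weight; } }

    // Batch update: one "real" acquisition (one InterlockedCompareExchange);
    // the nested lock statements inside the setters are recursive and only
    // increment the Monitor's internal counter.
    public void Update(string label, int weight)
    {
        lock (_sync)
        {
            SetLabel(label);
            SetWeight(weight);
        }
    }
}
```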

To dig deeper, I would have to find more information on the implementation of InterlockedCompareExchange; I believe it relies on a dedicated CPU compare-and-swap instruction...
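For illustration only, here is a naive spin lock built directly on Interlocked.CompareExchange. This is not how Monitor is implemented; it just shows the compare-and-swap step that an uncontended acquisition essentially boils down to.

```csharp
using System.Threading;

// Illustrative only: a naive spin lock showing the CAS step that an
// uncontended Monitor.Enter roughly corresponds to. Not for production use;
// it spins instead of blocking and is not reentrant.
public sealed class NaiveSpinLock
{
    private int _state; // 0 = free, 1 = held

    public void Enter()
    {
        // Atomically set _state to 1 if it is currently 0.
        while (Interlocked.CompareExchange(ref _state, 1, 0) != 0)
        {
            Thread.SpinWait(20); // back off a little before retrying
        }
    }

    public void Exit()
    {
        Volatile.Write(ref _state, 0);
    }
}
```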


Typically, performance issues around locking come down to contention. Acquiring an uncontended lock takes on the order of tens of nanoseconds. Contention is the real killer. As you noted, having more locks (finer-grained locking) can improve performance by reducing contention.

The downside of multiple locks is that lock management generally becomes more complicated. If several locks are required to complete an operation, the likelihood of liveness problems such as deadlock or starvation increases. Proper lock management, such as enforcing a lock acquisition order, can mitigate these problems.
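For example, one common way to enforce an ordering when two fine-grained locks are needed at once is to always acquire them by some stable key. A sketch, assuming hypothetical Id and SyncRoot members on the locked objects:

```csharp
using System;

// Hypothetical shape of a lockable graph object; Id and SyncRoot are
// assumptions for this sketch, not part of any real API.
public interface ILockableNode
{
    int Id { get; }
    object SyncRoot { get; }
}

public static class LockOrdering
{
    // Always acquire the two locks in ascending Id order so that two
    // threads locking the same pair of objects can never deadlock.
    public static void WithBothLocked(ILockableNode a, ILockableNode b, Action action)
    {
        ILockableNode first  = a.Id <= b.Id ? a : b;
        ILockableNode second = a.Id <= b.Id ? b : a;

        lock (first.SyncRoot)
        {
            lock (second.SyncRoot)
            {
                action();
            }
        }
    }
}
```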

Lacking more detailed information, I would probably go with a single lock, since the implementation is simpler, and then monitor the performance of my application. In particular, there are .NET performance counters related to lock contention that can help diagnose/detect locking problems if they do arise.
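As an aside: on newer runtimes (.NET Core 3.0 and later) the same signal is also exposed in code via Monitor.LockContentionCount, which makes a quick before/after check easy. A minimal sketch, with a made-up workload method:

```csharp
using System;
using System.Threading;

class ContentionProbe
{
    static void Main()
    {
        long before = Monitor.LockContentionCount;   // requires .NET Core 3.0+ / .NET 5+
        RunWorkload();                               // hypothetical workload under test
        Console.WriteLine($"Contended acquisitions: {Monitor.LockContentionCount - before}");
    }

    static void RunWorkload() { /* exercise the graph from multiple threads here */ }
}
```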


As with all performance-related answers, I would like to point you to this exceptional blog post by Eric Lippert. Take a look at his six questions: what are the answers in your case? Try it out and see what happens under your own conditions.

The number of cores, contention, caching, etc. all matter, so see what happens in your specific case; it is really impossible to know in advance.

For those who don't click the link: race your horses!
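In that spirit, here is a rough harness you could adapt; the shape of the work is made up, and the numbers it prints only mean something on your own hardware and with your own access pattern:

```csharp
using System;
using System.Diagnostics;
using System.Threading.Tasks;

class LockRace
{
    const int Objects = 10_000;
    const int Iterations = 1_000_000;

    static readonly object GlobalLock = new object();
    static readonly object[] PerObjectLocks = new object[Objects];
    static readonly int[] Values = new int[Objects];

    static void Main()
    {
        for (int i = 0; i < Objects; i++) PerObjectLocks[i] = new object();

        Measure("single lock",     i => { lock (GlobalLock)                 { Values[i % Objects]++; } });
        Measure("per-object lock", i => { lock (PerObjectLocks[i % Objects]) { Values[i % Objects]++; } });
    }

    static void Measure(string name, Action<int> body)
    {
        var sw = Stopwatch.StartNew();
        Parallel.For(0, Iterations, body);
        Console.WriteLine($"{name}: {sw.ElapsedMilliseconds} ms");
    }
}
```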

I'm not talking about performance as in raw speed here, but rather about what happens once the application has been running for a while. Judging by the internal lock (Monitor) implementation, .NET's Monitor is pretty smart, so having a lock per object seems like a viable approach, given that you said the objects number in the tens of thousands, not millions.

Bottom line: does having many lock objects hurt locking performance, or does it make no difference?

Not in and of itself, but it may be a reason to look at the architecture of your program, since a gazillion objects locked simultaneously can cause overhead.

