Why is Double-Checked Locking used?

I keep working on code that uses double-checked locking, and I'm still confused about why it is used at all.

I initially didn't know that double-checked locking is broken, and when I found out about it, it raised this question for me: why do people use it in the first place? Isn't compare-and-exchange better?

 if (field == null)
     Interlocked.CompareExchange(ref field, newValue, null);
 return field;

(My question applies to both C# and Java, although the code above is C#.)

Does double-checked locking have some particular advantage over atomic operations?
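For reference, here is what the idiom under discussion looks like in Java 5+ (a minimal sketch; the class name is made up for illustration, and the field must be volatile for this to be correct):

```java
// Double-checked locking in Java 5 and later. The volatile modifier is
// essential: without it the idiom is broken under the Java memory model.
class Cached {
    private static volatile Cached instance;

    static Cached getInstance() {
        Cached result = instance;            // first (unsynchronized) check
        if (result == null) {
            synchronized (Cached.class) {
                result = instance;           // second check, under the lock
                if (result == null) {
                    instance = result = new Cached();
                }
            }
        }
        return result;
    }
}
```

The local `result` variable is a common refinement: it means only one volatile read happens on the fast path.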

+4
5 answers

Well, the only advantage that comes to mind is the (illusion of) performance: you first check in a non-thread-safe way, and only then perform the locking operations needed to check the variable safely, which can be expensive. However, double-checked locking is broken in a way that rules out drawing any solid conclusions from the non-thread-safe check, and it has always struck me as premature optimization anyway, so I would not claim any advantages for it. It is an outdated, pre-Java-5-era idiom, and wherever I found it I would want to fix it.

Edit: to be clear(er), I believe double-checked locking is an idiom that evolved as a performance improvement over locking and checking every time, and that it is roughly equivalent to an unencapsulated compare-and-swap. Personally, I'm also a fan of encapsulating synchronized sections of code, so I think it's better to call a separate operation that does the dirty work.

+1

Does double-checked locking have some particular advantage over atomic operations?

(This answer covers only C#; I don't know the details of the Java memory model.)

The fundamental difference is a potential race. If you have:

 if (f == null)
     CompareExchange(ref f, FetchNewValue(), null);

then FetchNewValue() can be called arbitrarily many times on different threads. One of those threads wins the race. If FetchNewValue() is extremely expensive and you want it to be called only once, then:

 if (f == null)
     lock (whatever)
         if (f == null)
             f = FetchNewValue();

ensures that FetchNewValue() is called only once.

If I personally want to perform low-lock lazy initialization, I do what you suggest: I use an interlocked operation and live with the rare race condition in which two threads both run the initializer and only one wins. If that is unacceptable, I use locks.
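A sketch of that interlocked approach in Java terms, where AtomicReference.compareAndSet plays the role of Interlocked.CompareExchange (class and method names here are made up for illustration):

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicReference;

// Low-lock lazy initialization: the factory may run more than once under
// contention, but only one result is ever published.
class LazyField {
    static final AtomicReference<Object> f = new AtomicReference<>();
    static final AtomicInteger factoryCalls = new AtomicInteger();

    // Stand-in for the expensive factory; counts how often it runs.
    static Object fetchNewValue() {
        factoryCalls.incrementAndGet();
        return new Object();
    }

    static Object get() {
        if (f.get() == null) {
            // Under contention several threads may each call fetchNewValue(),
            // but only the first successful compareAndSet installs its value;
            // the losers' values are simply discarded.
            f.compareAndSet(null, fetchNewValue());
        }
        return f.get();
    }
}
```

Single-threaded, the factory runs exactly once; the race only costs you redundant factory calls, never an inconsistent field.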

+11

In C #, it was never broken, so we can ignore it for now.

The code you posted assumes that newValue is already available, or is cheap to (re)compute. Double-checked locking ensures that only one thread actually performs the initialization.

However, in modern C #, I usually prefer to use Lazy<T> to work with initialization.

+4

Double-checked locking is used when locking for the entire method would hurt performance. In other words, if you don't want to synchronize on the object (the one the method is called on) or on the class, you can use double-checked locking.

That can be the case when there is a lot of contention for the lock and the resource protected by the lock is expensive to create; you would want to defer creation until it is actually needed. Double-checked locking improves performance by first checking a condition (the hint) to decide whether the lock needs to be taken at all.

Double-checked locking was broken in Java before Java 5, when the new memory model was introduced. Until then, it was entirely possible for the hint to be true on one thread and false on another. In any case, the Initialization-on-Demand Holder idiom is a suitable replacement for the double-checked locking pattern; I find it much easier to understand.
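The holder idiom mentioned above, as a minimal sketch (class names made up for illustration):

```java
// Initialization-on-Demand Holder idiom: the JVM does not initialize the
// nested Holder class until getInstance() is first called, and class
// initialization is guaranteed thread-safe, so no explicit locking or
// volatile field is needed.
class Singleton {
    private Singleton() {}

    private static class Holder {
        static final Singleton INSTANCE = new Singleton();
    }

    static Singleton getInstance() {
        return Holder.INSTANCE;
    }
}
```

The lazy-initialization and thread-safety work is delegated entirely to the JVM's class-loading machinery, which is why there is nothing to get wrong.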

+1

At some level it makes sense that a value which only changes at startup shouldn't require a lock to access, but then you have to add some kind of lock (which you probably won't need) just in case two threads try to access it at startup, and it works most of the time. It's broken, but I can see why it's an easy trap to fall into.

0
