Mike Ash Singleton: Placement of @synchronized

I came across this in Mike Ash's "Care and Feeding of Singletons" and was a little puzzled by his comment:

This code is kind of slow, though. Taking a lock is somewhat expensive. Making it more painful is the fact that the vast majority of the time, the lock is pointless. The lock is only needed when foo is nil, which basically only happens once. After the singleton is initialized, the need for the lock is gone, but the lock itself remains.

 + (id)sharedFoo {
     static Foo *foo = nil;
     @synchronized([Foo class]) {
         if (!foo)
             foo = [[self alloc] init];
     }
     return foo;
 }

My question is, and there may be no answer to this, but why couldn't you write it like the code below, so the lock is only taken when foo is nil?

 + (id)sharedFoo {
     static Foo *foo = nil;
     if (!foo) {
         @synchronized([Foo class]) {
             foo = [[self alloc] init];
         }
     }
     return foo;
 }

cheers gary

+7
objective-c cocoa double-checked-locking
5 answers

Because then the test is subject to a race condition. Two different threads might independently test that foo is nil and then (sequentially) create separate instances. This can happen in your modified version when one thread performs the test while the other is still inside +[Foo alloc] or -[Foo init] and has not yet assigned to foo.

By the way, I wouldn't do it that way. Check out the dispatch_once() function, which guarantees that a block is only ever executed once during the lifetime of your application (assuming you have GCD on the platform you are targeting).

+18

This is called the double-checked locking "optimization". As documented everywhere, it is not safe. Even if it is not defeated by a compiler optimization, it will be defeated by the way memory works on modern machines, unless you use some kind of fences/barriers.

Mike Ash also shows the correct solution using volatile and OSMemoryBarrier().

The problem is that when one thread executes foo = [[self alloc] init]; , there is no guarantee that, when another thread sees foo != 0 , all of the memory writes performed in init are visible too.

Also see DCL and C++ and DCL and Java for more details.

+7

In your version, the check for !foo can occur simultaneously on several threads, allowing both threads to enter the alloc block, with one waiting for the other to finish and then allocating another instance itself.

+1

You can optimize by only taking the lock when foo == nil, but then you must test again (inside the @synchronized block) to guard against race conditions.

 + (id)sharedFoo {
     static Foo *foo = nil;
     if (!foo) {
         @synchronized([Foo class]) {
             if (!foo) // test again, in case two threads got here at once
                 foo = [[self alloc] init];
         }
     }
     return foo;
 }
+1

The best way, if you have Grand Central Dispatch:

 + (MySingleton *)instance {
     static dispatch_once_t _singletonPredicate;
     static MySingleton *_singleton = nil;
     dispatch_once(&_singletonPredicate, ^{
         _singleton = [[super allocWithZone:nil] init];
     });
     return _singleton;
 }

 + (id)allocWithZone:(NSZone *)zone {
     return [self instance];
 }
+1
