Using dispatch_sync in Grand Central Dispatch

Can anyone explain, with a really clear use case, what the purpose of dispatch_sync in GCD is? I can't understand where and why I would need to use it.

Thanks!

+71
ios cocoa ios4 grand-central-dispatch
Jan 05 '11 at 17:26
8 answers

You use it when you want to execute a block and wait for its results.

One example of this is the pattern where you use a dispatch queue instead of locks for synchronization. For example, suppose you have a shared NSMutableArray a, with access mediated by a dispatch queue q. A background thread can append to the array (async), while the foreground thread pops the first item off (sync):

 NSMutableArray *a = [[NSMutableArray alloc] init];
 // All access to `a` is via this dispatch queue!
 dispatch_queue_t q = dispatch_queue_create("com.foo.samplequeue", NULL);

 dispatch_async(q, ^{ [a addObject:something]; }); // append to array, non-blocking

 __block Something *first = nil;     // "__block" to make results from the block available
 dispatch_sync(q, ^{                 // note that these 3 statements...
     if ([a count] > 0) {            // ...are all executed together...
         first = [a objectAtIndex:0];    // ...as part of a single block...
         [a removeObjectAtIndex:0];      // ...to ensure consistent results
     }
 });
+74
Jan 05 '11 at 18:18

First, understand its brother, dispatch_async:

 // Do something
 dispatch_async(queue, ^{
     // Do something else
 });
 // Do more stuff

You use dispatch_async to run a block on a new thread. When you do that, the current thread does not stop. That means // Do more stuff may execute before // Do something else finishes.

What happens if you want the current thread to stop and wait?

You don't use dispatch at all. Just write the code normally:

 // Do something
 // Do something else
 // Do more stuff

Now, say you want to do something on a DIFFERENT thread, and yet wait, so you can make sure things are executed consecutively.

There are many reasons to do this. UI updates, for example, are done on the main thread.

That's when you use dispatch_sync:

 // Do something
 dispatch_sync(queue, ^{
     // Do something else
 });
 // Do more stuff

Here you get // Do something, // Do something else, and // Do more stuff executed consecutively, even though // Do something else is executed on a different thread.

Usually, when people use a different thread, the whole point is that something can execute without waiting. Say you want to download a lot of data, but you want to keep the UI smooth.

Hence, dispatch_sync is rarely used. But it's there. I personally have never used it. Why not ask for some sample code or a project that does use dispatch_sync?
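One caveat worth adding here (my note, not part of the answer above): dispatch_sync must never target the serial queue you are currently running on. The queue ends up waiting for itself, which deadlocks. A minimal sketch of the pitfall, assuming the code is already executing on the main queue:

```objc
// DEADLOCK - do not ship this. If we are already on the main queue,
// dispatch_sync blocks the main queue while waiting for the main queue
// to become free, so the block below never gets to run.
dispatch_sync(dispatch_get_main_queue(), ^{
    NSLog(@"never reached when called from the main thread");
});
```

This is one more reason dispatch_async is the safer default unless you truly need to wait for the result.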

+72
05 Oct '12 at 4:15

dispatch_sync is semantically equivalent to a traditional mutex lock.

 dispatch_sync(queue, ^{
     // access shared resource
 });

works the same way as:

 pthread_mutex_lock(&lock);
 // access shared resource
 pthread_mutex_unlock(&lock);
+23
Jan 06

If you want a practical example, consider my question:

How to resolve this deadlock that occurs occasionally?

I solve it by ensuring that my main managedObjectContext is created on the main thread. The process is very fast, and I don't mind waiting. Not waiting means I would have to deal with a lot of concurrency problems.

I need dispatch_sync because some code needs to be done on the main thread, which is a different thread than the one where the code is executing.

So basically, if you want code that

1. proceeds as usual — you don't want to worry about race conditions, and you want to make sure the code has completed before moving on — and
2. is done on a different thread,

use dispatch_sync.

If 1 doesn't hold, use dispatch_async. If 2 doesn't hold, just write the code as usual.

So far, I have only done this once: when something needs to be done on the main thread.

So here is the code:

 +(NSManagedObjectContext *)managedObjectContext {
     NSThread *thread = [NSThread currentThread];
     //BadgerNewAppDelegate *delegate = [BNUtilitiesQuick appDelegate];
     //NSManagedObjectContext *moc = delegate.managedObjectContext;
     if ([thread isMainThread]) {
         //NSManagedObjectContext *moc = [self managedObjectContextMainThread];
         return [self managedObjectContextMainThread];
     }
     else {
         dispatch_sync(dispatch_get_main_queue(), ^{
             [self managedObjectContextMainThread]; // Access it once to make sure it's there
         });
     }

     // a key to cache the context for the given thread
     NSMutableDictionary *managedObjectContexts = [self thread].managedObjectContexts;

     @synchronized(self) {
         if ([managedObjectContexts objectForKey:[self threadKey]] == nil) {
             NSManagedObjectContext *threadContext = [[NSManagedObjectContext alloc] initWithConcurrencyType:NSPrivateQueueConcurrencyType];
             threadContext.parentContext = [self managedObjectContextMainThread];
             //threadContext.persistentStoreCoordinator = [self persistentStoreCoordinator];
             threadContext.mergePolicy = NSMergeByPropertyObjectTrumpMergePolicy;
             [managedObjectContexts setObject:threadContext forKey:[self threadKey]];
         }
     }
     return [managedObjectContexts objectForKey:[self threadKey]];
 }
+3
Oct. 12

David Gelhar left it unsaid that his example will work only because he quietly created a serial queue (he passed NULL to dispatch_queue_create, which equals DISPATCH_QUEUE_SERIAL).

If you want to create a concurrent queue (to gain all the multithreading power), his code will crash due to NSArray mutation (addObject:) during mutation (removeObjectAtIndex:), or even bad access (NSArray range beyond bounds). In that case we should use a barrier to ensure exclusive access to the NSMutableArray while both blocks run. Not only does it exclude all other writes to the array while it runs, it also excludes all other reads, making the modification safe.

An example for a concurrent queue should look like this:

 NSMutableArray *a = [[NSMutableArray alloc] init];
 // All access to `a` is via this concurrent dispatch queue!
 dispatch_queue_t q = dispatch_queue_create("com.foo.samplequeue", DISPATCH_QUEUE_CONCURRENT);

 // append to the array concurrently but safely, and don't wait for block completion
 dispatch_barrier_async(q, ^{ [a addObject:something]; });

 __block Something *first = nil;
 // pop 'Something first' from the array concurrently and safely, but wait for block completion...
 dispatch_barrier_sync(q, ^{
     if ([a count] > 0) {
         first = [a objectAtIndex:0];
         [a removeObjectAtIndex:0];
     }
 });
 // ... then here you get your 'first = [a objectAtIndex:0];' due to the synchronized dispatch.
 // If you use async instead of sync here, then first will be nil.
+3
Jan 29 '14 at 16:40

dispatch_sync is mainly used inside a dispatch_async block to perform some operation on the main thread (such as updating the UI):

 dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
     // Update UI on the main thread
     dispatch_sync(dispatch_get_main_queue(), ^{
         self.view.backgroundColor = color;
     });
 });
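A side note (my addition, not part of the answer): if the background work doesn't need to wait for the UI update to finish, dispatch_async to the main queue works just as well and doesn't block the background queue; dispatch_sync is only needed when the code after the block depends on the update having completed. A fire-and-forget sketch of the same pattern:

```objc
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    // ... long-running work off the main thread ...
    dispatch_async(dispatch_get_main_queue(), ^{
        // fire-and-forget UI update; the background block doesn't wait for it
        self.view.backgroundColor = color;
    });
});
```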
+3
Jan 08 '16 at 18:37

Here's a half-way realistic example. You have 2000 zip files that you want to parse in parallel. But the zip library is not thread-safe. Therefore, all work that touches the zip library goes into the unzipQueue. (The example is in Ruby, but all calls map directly onto the C library. "apply", for example, maps onto dispatch_apply(3).)

 #!/usr/bin/env macruby -w

 require 'rubygems'
 require 'zip/zipfilesystem'

 @unzipQueue = Dispatch::Queue.new('ch.unibe.niko.unzipQueue')

 def extractFile(n)
     @unzipQueue.sync do
         Zip::ZipFile.open("Quelltext.zip") { |zipfile|
             sourceCode = zipfile.file.read("graph.php")
         }
     end
 end

 Dispatch::Queue.concurrent.apply(2000) do |i|
     puts i if i % 200 == 0
     extractFile(i)
 end
0
Jun 16

I have used dispatch_sync inside a dispatch_async block to signal UI changes back to the main thread.

My asynchronous block is held up only a little, and I know the main thread is aware of the UI changes and will act on them. I generally use this in a block of code that takes some CPU time, but from which I still want to make UI changes. Making the UI changes directly in the async block is useless, because the UI, as I believe, runs on the main thread. Also, making them as secondary async blocks, or via self-delegation, results in the UI seeing them only a few seconds later, and it looks tardy.

Block example:

 dispatch_queue_t myQueue = dispatch_queue_create("my.dispatch.q", 0);
 dispatch_async(myQueue, ^{
     // Do some nasty CPU intensive processing, load file, whatever
     if (/* some condition in the nasty CPU processing stuff */) {
         // Do stuff
         dispatch_sync(dispatch_get_main_queue(), ^{
             /* Do stuff that affects the UI here */
         });
     }
 });
-1
Jan 02 '13 at 10:02
