NSManagedObjectContext performBlockAndWait: not executing in the background thread?

I have an NSManagedObjectContext declared like this:

    - (NSManagedObjectContext *)backgroundMOC {
        if (backgroundMOC != nil) {
            return backgroundMOC;
        }
        backgroundMOC = [[NSManagedObjectContext alloc] initWithConcurrencyType:NSPrivateQueueConcurrencyType];
        return backgroundMOC;
    }

Note that it is declared with private queue concurrency, so its tasks should be executed on a background thread. I have the following code:

    - (void)testThreading {
        /* ok */
        [self.backgroundMOC performBlock:^{
            assert(![NSThread isMainThread]);
        }];

        /* CRASH */
        [self.backgroundMOC performBlockAndWait:^{
            assert(![NSThread isMainThread]);
        }];
    }

Why does performBlockAndWait execute the block on the main thread rather than on a background thread?

+40
ios objective-c iphone core-data
4 answers

Unlike another answer, let me try to explain why performBlockAndWait will always execute in the calling thread.

performBlock is completely asynchronous. It always enqueues the block onto the queue of the receiving MOC and then returns immediately. Thus,

    [moc performBlock:^{
        // Foo
    }];

    [moc performBlock:^{
        // Bar
    }];

will place two blocks on the queue for moc. They will execute asynchronously. Some unknown thread will pull those blocks off the queue and execute them. In addition, those blocks are wrapped in their own autorelease pool, and they also represent a complete Core Data user event (processPendingChanges).
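As a rough sketch (assuming moc is a private-queue context like the one in the question), the call returns before the block has run, and the block runs on whatever thread the queue machinery picks:

    NSLog(@"Before performBlock (thread: %@)", [NSThread currentThread]);
    [moc performBlock:^{
        // Runs later, on whatever thread the queue machinery picks.
        NSLog(@"Inside block (thread: %@)", [NSThread currentThread]);
    }];
    NSLog(@"After performBlock - the block may not have run yet");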

performBlockAndWait does NOT use the internal queue. It is a synchronous operation that executes in the context of the calling thread. Of course, it will wait until the operations currently on the queue have completed, and then the block will execute in the calling thread. This is documented (and reiterated in several WWDC presentations).

Furthermore, performBlockAndWait is re-entrant, so nested calls all happen right in that calling thread.
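A minimal sketch of that re-entrancy (again assuming moc is a private-queue context): nesting performBlockAndWait on the same context does not deadlock, and both blocks run synchronously on the thread that made the outer call.

    [moc performBlockAndWait:^{
        NSLog(@"Outer block - main thread? %d", [NSThread isMainThread]);
        [moc performBlockAndWait:^{
            // Re-entrant: executes immediately on the same (calling) thread.
            NSLog(@"Inner block - main thread? %d", [NSThread isMainThread]);
        }];
    }];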

The Core Data engineers have been very clear that the actual thread on which a queue-based MOC operation runs is not important. It is the synchronization via the performBlock* API that is key.

So, consider "executeBlock" as "This block is placed in a queue that must be executed at some indefinite time, in some indefinite thread. The function will return to the caller as soon as it is queued"

performBlockAndWait : "This block will be executed for some indefinite time in the same thread. The function will return after this code has been completely executed (what will happen after the current queue associated with this MOC is discharged)."

EDIT

Are you sure that "performBlockAndWait does NOT use the internal queue"? I think it does. The only difference is that performBlockAndWait waits until the block completes. And what do you mean by the calling thread? In my understanding, [moc performBlockAndWait] and [moc performBlock] both run on the MOC's private queue (background or main). The important concept here is that the queue belongs to the moc, not the other way around. Please correct me if I am wrong. – Philip007

It is unfortunate that I worded the answer as I did, because, taken on its own, it is not true. However, in the context of the original question, it is correct. Specifically, when calling performBlockAndWait on a private-queue context, the block will execute on the thread that called the function; it will not be placed on the queue and executed on the "private thread".

Now, before I even get into the details, I want to stress that depending on the internal workings of libraries is very dangerous. All you should really take away is that you can never expect a specific thread to execute a block, except for anything tied to the main thread. Thus, expecting performBlockAndWait to not execute on the main thread is ill-advised, because it will execute on whatever thread called it.

performBlockAndWait does use GCD, but it also has its own layer on top (for example, to prevent deadlocks). If you look at the GCD code (which is open source), you can see how synchronous calls work: in general they synchronize with the queue and invoke the block on the thread that called the function, provided the queue is not the main queue or a global queue. Also, in the WWDC talks, the Core Data engineers stress the point that performBlockAndWait will run in the calling thread.
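As a GCD analogy (not Core Data itself, and only a sketch; the queue label is illustrative), dispatch_sync on a private serial queue generally runs the block on the calling thread rather than hopping to another thread:

    dispatch_queue_t queue = dispatch_queue_create("com.example.private", DISPATCH_QUEUE_SERIAL);
    NSLog(@"Caller thread: %@", [NSThread currentThread]);
    dispatch_sync(queue, ^{
        // On a private queue this typically runs on the very thread that called dispatch_sync.
        NSLog(@"Block thread: %@", [NSThread currentThread]);
    });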

So, when I say it does not use the internal queue, that does not mean it does not use the data structures at all. It must synchronize the call with the blocks already on the queue, as well as with blocks submitted from other threads and other asynchronous calls. However, when calling performBlockAndWait, it does not enqueue the block... instead it synchronizes access and runs the submitted block on the thread that called the function.
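To make the "synchronizes with blocks already on the queue" part concrete, here is a small sketch (assuming moc is a private-queue context): the waiting call does not run its block until previously enqueued asynchronous blocks have finished.

    [moc performBlock:^{
        NSLog(@"First (async)");
    }];

    [moc performBlockAndWait:^{
        // Runs on the calling thread, but only after "First (async)" has completed.
        NSLog(@"Second (sync)");
    }];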

Now, SO is not a good forum for this, because it is a bit more complex than that, especially with respect to the main queue and the global GCD queues, but the latter is not important for Core Data.

The main point is that when you call any performBlock* or GCD function, you should not expect it to run on any particular thread (except for things tied to the main thread), because queues are not threads, and only the main queue will run blocks on one specific thread.

When calling the Core Data performBlockAndWait, the block will execute in the calling thread (but will be appropriately synchronized with everything submitted to the queue).

I hope this makes sense, although it probably just caused more confusion.

EDIT

In addition, you can see an unspoken implication of this, because the way performBlockAndWait provides re-entrancy support breaks the FIFO ordering of blocks. As an example...

    [context performBlockAndWait:^{
        NSLog(@"One");

        [context performBlock:^{
            NSLog(@"Two");
        }];

        [context performBlockAndWait:^{
            NSLog(@"Three");
        }];
    }];

Note that strict adherence to the FIFO guarantee of the queue would mean that the nested performBlockAndWait ("Three") would execute after the asynchronous block ("Two"), since it was submitted after the asynchronous block was submitted. However, that is not what happens, since it would be impossible... for the same reason that a deadlock ensues with nested dispatch_sync calls (so the observed order is "One", "Three", "Two"). Just something to be aware of if you are using the synchronous version.

In general, avoid the synchronous versions whenever possible, because dispatch_sync can cause a deadlock, and any re-entrant version, like performBlockAndWait, will have to make some "bad" decision to support it... for example, letting the synchronous version "jump the queue".
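For contrast, here is the classic nested dispatch_sync deadlock that performBlockAndWait's re-entrancy is designed to avoid (a sketch; the queue name is illustrative):

    dispatch_queue_t q = dispatch_queue_create("com.example.serial", DISPATCH_QUEUE_SERIAL);
    dispatch_sync(q, ^{
        // The outer block occupies the serial queue, so the inner dispatch_sync
        // waits forever for a queue that can never drain: deadlock.
        dispatch_sync(q, ^{
            NSLog(@"Never reached");
        });
    });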

+92
β€” -

Why not? The Grand Central Dispatch concurrency paradigm (which I assume the MOC uses internally) is designed so that only the runtime and the operating system need to worry about threads, not the developer (because the OS can do it better than you, since it has more detailed information). Too many people assume that queues are the same as threads. They are not.

Blocks placed on a queue are not required to run on any particular thread (the exception being blocks on the main queue, which must execute on the main thread). So, in fact, synchronously executed blocks (i.e., performBlockAndWait) will sometimes run on the main thread if the runtime considers that more efficient than creating a thread for it. Since you are waiting for the result anyway, it would not change how your program functions if the main thread were to hang for the duration of the operation.

I am not sure I remember this last part correctly, but in the WWDC 2011 videos about GCD I believe it was mentioned that the runtime will try to run synchronous operations on the calling thread when possible, because it is more efficient. In the end, though, I suppose only the people who designed the system can really answer the "why".

+3

I don't think the MOC is obligated to use a background thread; it is only obligated to ensure that your code will not run into concurrency issues with the MOC if you use performBlock: or performBlockAndWait:. Since performBlockAndWait: is expected to block the current thread, it seems reasonable to run the block on that same thread.

0
Aug 6 '12 at 16:10

Calling performBlockAndWait: only ensures that the code is executed in a way that does not introduce concurrency (i.e., performBlockAndWait: calls on two threads will not run at the same time; they will block each other).

The long and short of it is that you cannot depend on which thread a MOC operation runs on, and you basically should not care anyway. I have learned the hard way that whether you use GCD or plain threading, you should create local MOCs for each operation and then merge them into the master MOC.
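A rough sketch of that "local MOC per operation" pattern (masterMOC is assumed to be your existing main context; the names are illustrative):

    NSManagedObjectContext *localMOC =
        [[NSManagedObjectContext alloc] initWithConcurrencyType:NSPrivateQueueConcurrencyType];
    localMOC.persistentStoreCoordinator = masterMOC.persistentStoreCoordinator;

    // Merge the local context's saves back into the master context.
    // (Keep the returned observer token around so it can be removed later.)
    [[NSNotificationCenter defaultCenter]
        addObserverForName:NSManagedObjectContextDidSaveNotification
                    object:localMOC
                     queue:nil
                usingBlock:^(NSNotification *note) {
            [masterMOC performBlock:^{
                [masterMOC mergeChangesFromContextDidSaveNotification:note];
            }];
        }];

    [localMOC performBlock:^{
        // ... do the actual work on localMOC ...
        NSError *error = nil;
        if (![localMOC save:&error]) {
            NSLog(@"Save failed: %@", error);
        }
    }];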

There is a great library (MagicalRecord) that makes this process very simple.

0
Aug 06 '12 at 16:20


