dispatch_queue_set_specific versus getting the current queue

I am trying to understand the difference and usage between these two:

    static void *myFirstQueue = "firstThread";
    dispatch_queue_t firstQueue = dispatch_queue_create("com.year.new.happy", DISPATCH_QUEUE_CONCURRENT);
    dispatch_queue_set_specific(firstQueue, myFirstQueue, (void*) myFirstQueue, NULL);

Question number 1

What is the difference between this:

    dispatch_sync(firstQueue, ^{
        if (dispatch_get_specific(myFirstQueue)) {
            // do something here
        }
    });

and the following:

    dispatch_sync(firstQueue, ^{
        if (firstQueue == dispatch_get_current_queue()) {
            // do something here
        }
    });

?

Question number 2:

Instead of using the above (void*) myFirstQueue in

 dispatch_queue_set_specific(firstQueue, myFirstQueue, (void*) myFirstQueue, NULL); 

Is it possible to use static int * myFirstQueue = 0; ?

My reasoning is based on the fact that:

dispatch_once_t is also 0. (Is there any correlation here? By the way, I still don't understand why dispatch_once_t must be initialized to 0, although I have already read the SO questions about it.)
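For reference, this is the standard dispatch_once pattern I mean (a minimal sketch; MyClass is just a placeholder name, not from my actual code):

    // Standard dispatch_once pattern: the predicate is a static
    // dispatch_once_t, which therefore starts out as 0, as required.
    + (instancetype)sharedInstance {
        static MyClass *shared = nil;
        static dispatch_once_t onceToken; // implicitly 0
        dispatch_once(&onceToken, ^{
            shared = [[MyClass alloc] init];
        });
        return shared;
    }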

Question number 3

Can you give me an example of GCD code that leads to a deadlock?

Question number 4

This may be too much to ask; I'll ask anyway, in case someone happens to know it off the top of their head. If not, feel free to leave this part unanswered.

I have not tried this because I really do not know how to go about it. But my idea is this:

Since we can "set a specific" on a queue, we still hold a handle on it, and could thus perhaps detect when a deadlock occurs on that queue after it has been created; and if one does occur, since we still have the specific we previously set on the queue, could we somehow do something to break the deadlock?

Again, if this is too much to answer, or if my reasoning is completely off base here (in Question 4), feel free to leave this part unanswered.

Happy New Year.


@san.t

With static void *myFirstQueue = 0;

we do this:

 dispatch_queue_set_specific(firstQueue, &myFirstQueue, &myFirstQueue, NULL); 

That part is completely clear.

But if we do this:

    static void *myFirstQueue = 1; // or any other number other than 0; would it be OK to revert back to the following?
    dispatch_queue_set_specific(firstQueue, myFirstQueue, (void*) myFirstQueue, NULL);

Regarding dispatch_once_t:

Could you elaborate on this:

Why must dispatch_once_t be 0, and how and why does it act like a boolean at a later stage? Is this due to memory safety, or to the fact that the memory address might previously have been occupied by other objects that were not 0 (nil)?

Regarding question number 3:

Sorry, I may not have been completely clear: I didn't mean that I ran into a deadlock myself. I meant: could anyone show me a GCD code snippet that leads to a deadlock?

And finally:

I hope you can answer question 4. If not, as mentioned earlier, it's OK.

+6
2 answers

First of all, I really don't think you meant to make that queue concurrent. dispatch_sync() to a concurrent queue doesn't really accomplish much (concurrent queues don't guarantee ordering between the blocks running on them), so the rest of this answer assumes you meant to have a serial queue there. Also, I'm going to answer this in general terms rather than your specific questions; hope that's OK :)
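(For instance, the queue from your question created as a serial queue would look like this; just a minimal sketch:)

    dispatch_queue_t firstQueue = dispatch_queue_create("com.year.new.happy", DISPATCH_QUEUE_SERIAL);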

There are two main problems with using dispatch_get_current_queue(). One is very broad and can be summed up as "recursive locking is a bad idea", and one is specific to dispatch and can be summed up as "you can, and often will, have more than one current queue".

Problem #1: Recursive locking is a bad idea

The usual goal of a private serial queue is to protect an invariant of your code (an "invariant" being "something that must be true"). For example, if you use a queue to protect access to a property so that it is thread-safe, the invariant is "this property does not have an invalid value" (for example: if the property is a struct, half the struct could have the new value and half the old value if it were set from two threads at once; a serial queue forces one thread or the other to finish setting the entire struct before the other can start).

It follows that for this to make sense, the invariant must hold when a block starts executing on the serial queue (otherwise it clearly wasn't protected). Once the block starts executing, it may break the invariant (for example, while setting the property) without fear of messing up any other thread, as long as the invariant holds again by the time it returns (in this example, the property must be fully set).

To summarize, just to make sure you are still following: at the beginning and end of each block on the serial queue, the invariant protected by the queue must hold. In the middle of each block it may be broken.
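To make that concrete, here is a minimal sketch of the pattern (the class, queue label, and struct are made-up names for illustration, not from the question):

    #import <Foundation/Foundation.h>

    // Made-up struct so the example doesn't depend on any UI framework.
    typedef struct { double width; double height; } BoxSize;

    @interface Box : NSObject
    - (void)setSize:(BoxSize)size;
    - (BoxSize)size;
    @end

    @implementation Box {
        dispatch_queue_t _guardQueue; // private serial queue protecting _size
        BoxSize _size;
    }

    - (instancetype)init {
        if ((self = [super init])) {
            _guardQueue = dispatch_queue_create("com.example.box.guard", DISPATCH_QUEUE_SERIAL);
        }
        return self;
    }

    // Invariant: no one ever observes a half-written BoxSize. It holds when
    // each block starts and when it returns; it is only broken (briefly)
    // in the middle of the setter's block.
    - (void)setSize:(BoxSize)size {
        dispatch_sync(_guardQueue, ^{ _size = size; });
    }

    - (BoxSize)size {
        __block BoxSize result;
        dispatch_sync(_guardQueue, ^{ result = _size; });
        return result;
    }

    @end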

If, inside a block, you call out to something that tries to use the thing protected by the queue, you have changed this simple rule into a much more complex one: instead of "at the beginning and end of each block", it is now "at the beginning, at the end, and at any point where the block calls out to something outside of itself". In other words, instead of thinking about your thread safety at the block level, you now have to examine every individual line of every block.

What does this have to do with dispatch_get_current_queue()? The only reason to use dispatch_get_current_queue() here is to check "am I already on this queue?", and if you find yourself asking that, you are already in the awful situation described above! So don't do it. Use private queues to protect things, and don't call out to other code from inside them. You should already know the answer to "am I on this queue?", and it should be "no".

This is the biggest reason dispatch_get_current_queue() was deprecated: so that people don't use it to try to simulate recursive locking (what I described above).

Problem #2: You can have more than one current queue!

Consider this code:

    dispatch_async(queueA, ^{
        dispatch_sync(queueB, ^{
            // what is the current queue here?
        });
    });

Obviously queueB is current, but we are also still on queueA! dispatch_sync makes the work on queueA wait for the work on queueB to complete, so both are effectively "current".

This means that this code will deadlock:

    dispatch_async(queueA, ^{
        dispatch_sync(queueB, ^{
            dispatch_sync(queueA, ^{});
        });
    });

You can also end up with several current queues by using target queues:

    dispatch_set_target_queue(queueB, queueA);
    dispatch_sync(queueB, ^{
        dispatch_sync(queueA, ^{
            /* deadlock! */
        });
    });

What is really needed here is something like a hypothetical dispatch_queue_is_synchronous_with_queue(queueA, queueB), but since it would only be useful for implementing recursive locking, and I have already described why that is a bad idea... it is unlikely to be added.

Note that if you strictly use dispatch_async(), you are immune to deadlocks. Unfortunately, you are not at all immune to race conditions.
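For instance, the same nesting as above written with only dispatch_async() never waits on anything and therefore cannot deadlock, though the blocks may now run in any order relative to each other (a minimal sketch):

    dispatch_async(queueA, ^{
        dispatch_async(queueB, ^{
            // This just enqueues the block on queueA and returns immediately;
            // nothing blocks waiting for anything, so no deadlock is possible.
            dispatch_async(queueA, ^{ /* runs later */ });
        });
    });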

+51

Question 1: The two pieces of code do the same thing, "doing some work" when the block really is running on firstQueue. However, they use different ways of determining that it is running on firstQueue: the first sets a non-NULL context ((void*)myFirstQueue) for a specific key (myFirstQueue) and then checks that the context is indeed non-NULL; the second checks using the deprecated dispatch_get_current_queue function. The first approach is preferred. But either check seems unnecessary to me here, since dispatch_sync already guarantees that the block runs on firstQueue.
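To illustrate how the first method behaves on and off the queue, here is a minimal sketch (same setup as in the question; the NSLog calls are just for illustration):

    static void *myFirstQueue = "firstThread";
    dispatch_queue_t firstQueue = dispatch_queue_create("com.year.new.happy", DISPATCH_QUEUE_CONCURRENT);
    dispatch_queue_set_specific(firstQueue, myFirstQueue, (void *)myFirstQueue, NULL);

    // On firstQueue the key was set for this queue, so a non-NULL context comes back.
    dispatch_sync(firstQueue, ^{
        NSLog(@"on firstQueue, context = %p", dispatch_get_specific(myFirstQueue)); // non-NULL
    });

    // Anywhere else (say, directly on the main thread) the key is not set on the
    // current queue or its target queues, so the lookup returns NULL.
    NSLog(@"elsewhere, context = %p", dispatch_get_specific(myFirstQueue)); // NULL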

Question 2: using static int *myFirstQueue = 0; as-is is no good, because myFirstQueue is then a NULL pointer, and dispatch_queue_set_specific(firstQueue, key, context, NULL); requires a non-NULL key and context. However, it will work with minor changes, as follows:

    static void *myFirstQueue = 0;
    dispatch_queue_t firstQueue = dispatch_queue_create("com.year.new.happy", DISPATCH_QUEUE_CONCURRENT);
    dispatch_queue_set_specific(firstQueue, &myFirstQueue, &myFirstQueue, NULL);

This uses the address of the variable myFirstQueue as both the key and the context.
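The matching check inside a block would then use the same address as the key (a small sketch to go with it):

    dispatch_sync(firstQueue, ^{
        if (dispatch_get_specific(&myFirstQueue)) {
            // we are running on firstQueue (or a queue that targets it)
        }
    });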

If we do this:

    static void *myFirstQueue = 1; // or any other number other than 0; would it be OK to revert back to the following?
    dispatch_queue_set_specific(firstQueue, myFirstQueue, (void*) myFirstQueue, NULL);

I think this would be fine, since neither use of the myFirstQueue pointer gets dereferenced as long as the last parameter (the destructor) is NULL.

The fact that dispatch_once_t is also 0 has nothing to do with this. It starts at 0, and after the dispatch_once has run the value changes to non-zero (~0l), so it essentially acts as a boolean.

Here are excerpts from once.h; you can see that dispatch_once_t is actually a long, and that as an Apple implementation detail it must initially be 0, likely because static and global variables default to zero. You can also see that there is this line:

 if (DISPATCH_EXPECT(*predicate, ~0l) != ~0l) { 

which essentially checks that the once predicate is still zero before calling into the dispatch_once function. This has nothing to do with memory safety.

    /*!
     * @typedef dispatch_once_t
     *
     * @abstract
     * A predicate for use with dispatch_once(). It must be initialized to zero.
     * Note: static and global variables default to zero.
     */
    typedef long dispatch_once_t;

    /*!
     * @function dispatch_once
     *
     * @abstract
     * Execute a block once and only once.
     *
     * @param predicate
     * A pointer to a dispatch_once_t that is used to test whether the block has
     * completed or not.
     *
     * @param block
     * The block to execute once.
     *
     * @discussion
     * Always call dispatch_once() before using or testing any variables that are
     * initialized by the block.
     */
    #ifdef __BLOCKS__
    __OSX_AVAILABLE_STARTING(__MAC_10_6,__IPHONE_4_0)
    DISPATCH_EXPORT DISPATCH_NONNULL_ALL DISPATCH_NOTHROW
    void
    dispatch_once(dispatch_once_t *predicate, dispatch_block_t block);

    DISPATCH_INLINE DISPATCH_ALWAYS_INLINE DISPATCH_NONNULL_ALL DISPATCH_NOTHROW
    void
    _dispatch_once(dispatch_once_t *predicate, dispatch_block_t block)
    {
        if (DISPATCH_EXPECT(*predicate, ~0l) != ~0l) {
            dispatch_once(predicate, block);
        }
    }
    #undef dispatch_once
    #define dispatch_once _dispatch_once
    #endif

Question 3: assuming myQueue is a serial queue (concurrent queues are fine, i.e. this does not deadlock on them):

    dispatch_async(myQueue, ^{
        dispatch_sync(myQueue, ^{
            NSLog(@"This would be a deadlock");
        });
    });

Question 4: not sure about this one.

+4
