Understanding dispatch_sync and global queues

I am studying the GCD mechanism and I have a couple of questions. I would be grateful if you would correct me where I am wrong.

Question 1) As far as I know, GCD has 4 global concurrent queues with different priorities. For example, when we write DISPATCH_QUEUE_PRIORITY_HIGH, we get one of these queues. These queues are not empty; some Apple tasks already run on them. Thus, when we add a block of code to one of these queues, it could end up as, say, the n-th task on the queue, where n is some arbitrary integer.

Now when we add a block of code like

    dispatch_sync(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
        // Heavy calculations
    });

in viewDidLoad, all UI components will be blocked until:

  • Apple's system tasks are completed (because we add our task to this queue and must wait for the other tasks already on it to finish), and
  • our own code completes.

Am I right? I know we should use dispatch_async here; I'm just wondering how everything works.

Question 2) As far as I know, all global queues are concurrent queues, which means they can run tasks either through context switching or through true parallelism. However, when we submit to such a queue with dispatch_sync, we are forced to wait until everything is done. The only thing that differs from serial queues in this case is the order of operations. For example, if task 1, task 2, task 3, and task 4 are on a serial queue, it will execute them strictly in order, but a concurrent queue can reorder them so that lighter operations run first.

So my question is: why would we ever use dispatch_sync? In my opinion, the main thread will be blocked until the code dispatched with dispatch_sync completes.

1 answer

GCD has 4 global concurrent queues with different priorities. For example, when we write DISPATCH_QUEUE_PRIORITY_HIGH, we get one of these queues. These queues are not empty; some Apple tasks run on them.

At any given time, the queues may or may not be empty. There is no way to know. And yes, Apple's frameworks can add work to these queues just as your code can.

However, a queue does not do any work itself. A queue is a data structure; it holds tasks in order. GCD manages a pool of worker threads, creating new ones or retiring them as needed. Those worker threads dequeue tasks from the queues and execute them.
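
As a rough sketch of that (the log message and the choice of priority are just for illustration), submitting a block only places it on the queue; whichever worker thread GCD assigns later dequeues and runs it:

    // Inside viewDidLoad (or any method), for example:
    dispatch_queue_t high = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0);

    // dispatch_async only *enqueues* the block and returns immediately.
    dispatch_async(high, ^{
        // Executed later by whichever worker thread GCD assigns,
        // not by the thread that enqueued it.
        NSLog(@"running on %@", [NSThread currentThread]);
    });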

when we add a block of code like

    dispatch_sync(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
        // Heavy calculations
    });

in viewDidLoad, all UI components will be blocked until: 1) Apple's system tasks are completed (because we add our task to the queue and must wait for the other tasks on it to finish), and 2) our own code completes.

As the name suggests, dispatch_sync() is synchronous. That means it does not return until the work it was asked to do (the block you passed to it) has completed. Whether it also has to wait for any other tasks on the queue depends on the available system resources. As you noted, the queue is concurrent, so tasks can be dequeued to run simultaneously. If there are enough free CPU cores, GCD can run enough worker threads to execute all of the tasks on the queue at the same time. In that case, your task does not have to wait for the other tasks to complete; it only has to wait until those tasks have been started (have been dequeued from the head of the queue) and a worker thread is available for it.

You only have to wait for other tasks to complete if all system resources (for example, CPU cores) are occupied.
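
A sketch of that point (the sleep durations and the number of "other" tasks are made up for illustration): with spare CPU cores, the dispatch_sync below returns as soon as its own block has run, even though longer tasks were enqueued ahead of it.

    dispatch_queue_t q = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

    // Pretend these were already on the queue (enqueued by frameworks or other code).
    for (int i = 0; i < 4; i++) {
        dispatch_async(q, ^{
            [NSThread sleepForTimeInterval:5.0];   // long-running task
        });
    }

    // On a machine with free cores this returns almost immediately:
    // it waits only for its own (trivial) block, not for the 5-second
    // tasks above to finish.
    dispatch_sync(q, ^{
        NSLog(@"my block ran");
    });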

As far as I know, all global queues are concurrent queues, which means they can run tasks either through context switching or through parallelism. However, when we submit to such a queue with dispatch_sync, we are forced to wait until all the work is done.

No, this is wrong, as I explained above. The only thing you know has to be completed before dispatch_sync() returns is the one task you submitted with it. It does not have to wait for any other tasks on the queue unless all of the CPU cores are busy.

The only thing that differs from serial queues in this case is the order of operations. For example, if task 1, task 2, task 3, and task 4 are on a serial queue, it will execute them strictly in order, but a concurrent queue can reorder them so that lighter operations run first.

No. Concurrent queues start operations in order, just like serial queues. It is simply that a serial queue does not start another operation until the current one, if any, has finished, while a global concurrent queue will let all of its operations start and run at the same time, up to the available resources. Queues have no way of knowing whether one operation is lighter than another.
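
A sketch of the difference (the queue labels are hypothetical): both queues start tasks in the order they were enqueued, but the serial queue waits for each one to finish before starting the next, while the concurrent queue lets them overlap.

    dispatch_queue_t serialQ = dispatch_queue_create("com.example.serial", DISPATCH_QUEUE_SERIAL);
    dispatch_queue_t concQ   = dispatch_queue_create("com.example.concurrent", DISPATCH_QUEUE_CONCURRENT);

    for (int i = 1; i <= 4; i++) {
        // Serial: task i+1 does not start until task i has finished,
        // so the log lines always appear as 1, 2, 3, 4.
        dispatch_async(serialQ, ^{
            NSLog(@"serial task %d", i);
        });

        // Concurrent: tasks are *started* in order 1, 2, 3, 4, but they may
        // run (and therefore finish and log) simultaneously, resources permitting.
        dispatch_async(concQ, ^{
            NSLog(@"concurrent task %d", i);
        });
    }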

So my question is: why would we ever use dispatch_sync? In my opinion, the main thread will be blocked until the code dispatched with dispatch_sync completes.

Concurrency and synchronous behavior are two separate concepts. Synchronous vs. asynchronous is about the behavior of the caller: it determines whether the caller must wait for the submitted task to complete before proceeding, or may continue right away.

Concurrent vs. serial is about how queued tasks are run. A concurrent queue allows its tasks to run simultaneously with one another. A serial queue runs only one of its tasks at a time.
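
Putting the two axes together (the queue names and empty blocks are placeholders), each dispatch call picks one option from each axis independently:

    dispatch_queue_t serialQ = dispatch_queue_create("com.example.demo", DISPATCH_QUEUE_SERIAL);
    dispatch_queue_t concQ   = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

    // async + concurrent: caller continues immediately; tasks may overlap each other.
    dispatch_async(concQ, ^{ /* work */ });

    // async + serial: caller continues immediately; the queue runs tasks one at a time.
    dispatch_async(serialQ, ^{ /* work */ });

    // sync + concurrent: caller waits for *this* block only; other tasks on the
    // queue may run alongside it.
    dispatch_sync(concQ, ^{ /* work */ });

    // sync + serial: caller waits; the block itself also waits for any earlier
    // blocks on the queue to finish first.
    dispatch_sync(serialQ, ^{ /* work */ });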

It can make sense to call dispatch_sync() from the main thread, but you have to be careful. It may be necessary, for example, when using a serial queue to synchronize access to a data structure that is shared by multiple threads. The general rule is that you must avoid blocking the main thread for a long time. It is fine to block it if you have every reason to believe it will only be for periods so short that the user cannot perceive them.
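
A common shape for that pattern (the class, queue label, and method names below are hypothetical): a private serial queue isolates a shared dictionary, with dispatch_sync used for reads because the caller needs the value back, and dispatch_async used for writes so the caller never blocks.

    #import <Foundation/Foundation.h>

    @interface SharedCache : NSObject
    - (id)objectForKey:(NSString *)key;
    - (void)setObject:(id)object forKey:(NSString *)key;
    @end

    @implementation SharedCache {
        dispatch_queue_t _isolationQueue;     // serial: one access at a time
        NSMutableDictionary *_storage;
    }

    - (instancetype)init {
        if ((self = [super init])) {
            _isolationQueue = dispatch_queue_create("com.example.cache.isolation",
                                                    DISPATCH_QUEUE_SERIAL);
            _storage = [NSMutableDictionary dictionary];
        }
        return self;
    }

    // Read: dispatch_sync blocks the caller, but only for as long as the
    // (very short) dictionary lookup takes.
    - (id)objectForKey:(NSString *)key {
        __block id result = nil;
        dispatch_sync(_isolationQueue, ^{
            result = self->_storage[key];
        });
        return result;
    }

    // Write: nothing to return, so the caller does not need to wait.
    - (void)setObject:(id)object forKey:(NSString *)key {
        dispatch_async(_isolationQueue, ^{
            self->_storage[key] = object;
        });
    }
    @end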

You definitely do not want to use dispatch_sync() from the main thread for "Heavy calculations", as you put it.

In general, you use dispatch_sync() when you need a task to complete before you can continue. Often you can restructure your code to use dispatch_async() instead and put the subsequent code inside the task as a continuation step (or completion handler). But you can't always do that.
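
For example (performHeavyCalculations and updateUIWithResult: are invented helper names), the heavy work from the question can usually be restructured like this so the main thread is never blocked:

    // Instead of dispatch_sync in viewDidLoad, hand the heavy work off
    // asynchronously and treat the UI update as the continuation step.
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
        NSDictionary *result = [self performHeavyCalculations];   // hypothetical helper

        // Hop back to the main queue for the "what happens next" part.
        dispatch_async(dispatch_get_main_queue(), ^{
            [self updateUIWithResult:result];                      // hypothetical helper
        });
    });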
