Does async/await increase context switching?

I know how async works. I know that when execution reaches an await, it frees the thread, and after the I/O completes, it picks a thread from the thread pool and runs the remaining code. Thus, threads are used efficiently. But I am confused about some cases:

  • Should we use async methods for a very fast I/O method, like a cache read/write? Won't they lead to unnecessary context switching? If we use the synchronous method, execution will finish on the same thread, and the context switch may not happen.
  • Does async/await only save memory (by creating fewer threads)? Or does it also save CPU? As far as I know, in the case of synchronous IO, while the IO is taking place, the thread goes to sleep. This means that it does not consume the processor. Is this understanding correct?
+7
multithreading c# asynchronous async-await
2 answers

Should we use async methods for a very fast I/O method, like a cache read/write?

Such IO will not block in the classical sense. "Blocking" is a loosely defined term. It usually means that the CPU must wait for the hardware.

This type of IO is pure CPU work, and there are no context switches. This typically happens when the application reads a file or socket more slowly than the data can be provided. Here, async IO does not help performance at all. I'm not even sure it would be suitable for unblocking the UI thread, since all the work may complete synchronously.

Or does it also save CPU?

In realistic workloads, it usually increases CPU usage. This is because the async machinery adds processing, allocations, and synchronization. Also, we need to transition to kernel mode twice instead of once (first to initiate the IO, then to dequeue the IO completion notification).

Typical workloads run at <100% CPU. A production server at >60% CPU would worry me, since there is no margin for error. In such cases, the thread-pool work queues are almost always empty. Therefore, there are no context-switch savings from processing multiple IO completions on a single context switch.

So CPU usage generally rises (slightly), unless the machine is under very high CPU load and the work queues can often hand off a new item immediately.

On the server, async IO is mostly useful for saving threads. If you have plenty of threads to spare, you will realize zero or negative gains. In particular, no single IO will be one bit faster.

This means that it does not consume the processor.

It would be a waste to leave the CPU unavailable while an IO is in flight. To the kernel, an IO is just a data structure. While it is in progress, there is no CPU work to be done.

An anonymous person said:

For IO-bound tasks, there may be no significant performance advantage to using separate threads just to wait for a result.

Pushing the same work onto another thread certainly does not help with throughput. That is added work, not reduced work. It's a shell game. (And async IO does not occupy a thread while it is in flight, so all of this rests on a false assumption.)

An easy way to convince yourself that async IO usually costs more CPU than sync IO: run a simple benchmark of a synchronous TCP ping/pong against an async one. Sync is faster. This is, of course, an artificial workload, so it is just a hint at what is going on, not a comprehensive measurement.
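A minimal sketch of such a ping/pong test might look like the following (the iteration count is arbitrary and the timing is illustrative, not a rigorous benchmark):

```csharp
// Sketch: compare synchronous vs async TCP ping/pong round trips on loopback.
using System;
using System.Diagnostics;
using System.Net;
using System.Net.Sockets;
using System.Threading.Tasks;

class PingPong
{
    const int Iterations = 10_000;

    static async Task Main()
    {
        var listener = new TcpListener(IPAddress.Loopback, 0);
        listener.Start();
        int port = ((IPEndPoint)listener.LocalEndpoint).Port;

        // Echo server: reads one byte, writes it back, until the client closes.
        var server = Task.Run(async () =>
        {
            using var accepted = await listener.AcceptTcpClientAsync();
            var stream = accepted.GetStream();
            var buf = new byte[1];
            while (await stream.ReadAsync(buf, 0, 1) == 1)
                await stream.WriteAsync(buf, 0, 1);
        });

        using var client = new TcpClient();
        client.Connect(IPAddress.Loopback, port);
        var cs = client.GetStream();
        var b = new byte[1];

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < Iterations; i++)   // synchronous round trips
        {
            cs.Write(b, 0, 1);
            cs.Read(b, 0, 1);
        }
        Console.WriteLine($"sync:  {sw.ElapsedMilliseconds} ms");

        sw.Restart();
        for (int i = 0; i < Iterations; i++)   // async round trips
        {
            await cs.WriteAsync(b, 0, 1);
            await cs.ReadAsync(b, 0, 1);
        }
        Console.WriteLine($"async: {sw.ElapsedMilliseconds} ms");

        client.Close();
        await server;
        listener.Stop();
    }
}
```

The async loop pays for the state machine, allocations, and completion dispatch on every round trip, which is exactly the overhead the answer describes.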

+3

I know how async works.

You do not.

I know that when execution reaches an await, it frees the thread

This is false. When execution reaches an await, the awaited operand is evaluated and then checked to see whether the operation is complete. If it is not, then the remainder of the method is signed up as the continuation of the awaitable, and a task representing the work of the current method is returned to the caller.

None of this is "freeing up the thread." Rather, control returns to the caller, and the caller continues executing on the current thread. Of course, if the current caller was the only thing on that thread, then the thread is done. But there is no requirement that an asynchronous method be the only call on a thread!
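The point about control returning to the caller can be sketched as follows (method names are illustrative; the awaited `Task.Delay` stands in for any high-latency operation):

```csharp
// Sketch: 'await' on an incomplete operation returns control to the caller;
// the caller keeps running on the current thread while the work is pending.
using System;
using System.Threading.Tasks;

class Demo
{
    static async Task DelayedAsync()
    {
        Console.WriteLine("before await");
        await Task.Delay(100);  // not complete: the rest of this method is signed
                                // up as a continuation, and a Task is returned
                                // to the caller immediately.
        Console.WriteLine("after await (the continuation)");
    }

    static async Task Main()
    {
        Task t = DelayedAsync();
        // Control came straight back here while the delay is still pending.
        Console.WriteLine("caller still running; task complete? " + t.IsCompleted);
        await t;
    }
}
```

Running this prints "caller still running" before "after await", showing that no thread was parked at the await; the caller simply kept going.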

after the I/O completes

The awaited operation need not be an I/O operation, but let's suppose it is.

it picks a thread from the thread pool and runs the remaining code.

No. It schedules the remaining code to run in the correct context. That context might be a thread-pool thread. It might be the UI thread. It might be the current thread. It could be any number of things.

Should we use async methods for a very fast I/O method, like a cache read/write?

The awaitable decides. If the awaitable knows that it can complete the operation in a reasonable amount of time, then it is entirely within its rights to perform the operation synchronously and return a completed task. In that case there is no penalty; you just check a boolean to see whether the task is complete.
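A sketch of this fast-path idea, using a hypothetical in-memory cache (the class and method names are invented for illustration):

```csharp
// Sketch: an awaitable may complete synchronously. On a cache hit,
// Task.FromResult returns an already-completed task, so 'await' just
// checks a boolean -- no continuation is scheduled, no thread switches.
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class Cache
{
    readonly ConcurrentDictionary<string, string> _map = new();

    public Task<string> GetAsync(string key)
    {
        if (_map.TryGetValue(key, out var value))
            return Task.FromResult(value);   // fast path: completed task, no penalty
        return LoadAndCacheAsync(key);       // slow path: genuinely asynchronous
    }

    async Task<string> LoadAndCacheAsync(string key)
    {
        await Task.Delay(50);                // stand-in for real I/O latency
        var value = "loaded:" + key;
        _map[key] = value;
        return value;
    }

    static async Task Main()
    {
        var cache = new Cache();
        Console.WriteLine(await cache.GetAsync("k"));  // slow path populates cache
        Task<string> hit = cache.GetAsync("k");
        Console.WriteLine("hit completed synchronously? " + hit.IsCompleted);
        Console.WriteLine(await hit);
    }
}
```

On the hit, `IsCompleted` is already true before the await. For very hot paths, `ValueTask<T>` can additionally avoid the task allocation, but the control flow is the same.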

Won't they lead to unnecessary context switching?

Not necessarily.

If we use the synchronous method, execution will finish on the same thread, and the context switch may not happen.

I am confused as to why you think a context switch happens during an I/O operation. I/O operations are handled by hardware, below the level of OS threads. There is no thread sitting there servicing your IO tasks.

Does async/await only save memory (by creating fewer threads)?

The point of await is to (1) make more efficient use of expensive worker threads by allowing workflows to become more asynchronous, thereby freeing up threads to do other work while waiting on high-latency results, and (2) make the source code for asynchronous workflows resemble the source code for synchronous workflows.
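Point (2) is easiest to see with the two styles side by side (a minimal sketch; the line-counting task is invented for illustration):

```csharp
// Sketch: the asynchronous workflow reads almost like the synchronous one;
// only 'async', 'await', and the Task return type differ.
using System;
using System.IO;
using System.Threading.Tasks;

class SideBySide
{
    static int CountLinesSync(string path)
    {
        string text = File.ReadAllText(path);             // thread blocks during IO
        return text.Split('\n').Length;
    }

    static async Task<int> CountLinesAsync(string path)
    {
        string text = await File.ReadAllTextAsync(path);  // thread is freed during IO
        return text.Split('\n').Length;
    }

    static async Task Main()
    {
        string tmp = Path.GetTempFileName();
        File.WriteAllText(tmp, "a\nb\nc");
        Console.WriteLine(CountLinesSync(tmp));           // same result either way
        Console.WriteLine(await CountLinesAsync(tmp));
        File.Delete(tmp);
    }
}
```

The logic, error handling, and local variables keep the same shape; the compiler-generated state machine is what frees the worker thread at the await.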

As far as I know, in the case of synchronous IO, while the IO is taking place, the thread goes to sleep. This means that it does not consume the processor. Is this understanding correct?

Sure, but you have it completely backwards. YOU WANT TO CONSUME THE CPU. You want to constantly consume as much CPU as possible! The CPU does work on behalf of the user, and if it is idle, then it is not getting its work done as fast as it could. Don't hire a worker and then pay them to sleep! Hire a worker, and the moment they are blocked on a high-latency task, set them to work on something else, so that the processor stays as hot as possible. The owner of that machine paid good money for that CPU; it should be running at 100% the whole time the work is getting done!

So, back to your main question:

Does async/await increase context switching?

I know a great way to find out. Write a program using await, write another one without it, run both, and measure the number of context switches per second. Then you will know.

But I don't see why context switches per second is the relevant metric. Consider two banks, each with many customers and many employees. At Bank #1, employees work on one task until it is completed; they never context-switch. If an employee is blocked waiting for a result from someone else, they go to sleep. At Bank #2, employees switch from one task to another whenever they are blocked, and are constantly serving customer requests. Which bank do you think serves its customers faster?

+20
