Why use asynchronous requests instead of a larger thread pool?

During Techdays here in the Netherlands, Steve Sanderson gave a presentation on C# 5, ASP.NET MVC 4, and asynchronous web programming.

He explained that when requests take a long time to complete, all the threads from the thread pool become busy and new requests have to wait. The server can't handle the load, and everything slows down.

He then showed how async web requests improve scalability, because the work is then delegated to another thread and the thread pool can respond quickly to new incoming requests. He demonstrated that 50 simultaneous requests at first took 50 * 1 s, but with asynchronous behavior in place, only 1.2 s.
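A minimal console sketch (not Steve's actual demo code; the 200 ms delay is a made-up stand-in for a 1 s request) of why the asynchronous version finishes in roughly the time of one request rather than the sum of all of them:

```csharp
using System;
using System.Diagnostics;
using System.Linq;
using System.Threading.Tasks;

class AsyncScalingSketch
{
    // Stand-in for one request whose time is dominated by I/O latency.
    static async Task HandleRequestAsync()
    {
        // Simulated 200 ms network/disk wait; no thread is blocked while it runs.
        await Task.Delay(200);
    }

    static async Task Main()
    {
        var sw = Stopwatch.StartNew();

        // 50 overlapping "requests": the waits all run concurrently,
        // so the total is roughly 200 ms plus overhead, not 50 * 200 ms = 10 s.
        await Task.WhenAll(Enumerable.Range(0, 50).Select(_ => HandleRequestAsync()));

        Console.WriteLine($"50 requests finished in {sw.ElapsedMilliseconds} ms");
    }
}
```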

But the demo left me with some questions.

  • Why can't we just use a bigger thread pool? Isn't using async/await to fire up another thread actually slower than simply increasing the thread pool size from the start? Doesn't it just look as if the server we're running on suddenly got more threads, or something like that?

  • The user's request is still waiting for the asynchronous operation to complete. If the thread from the pool is off doing something else, what keeps the "UI" thread busy? Steve mentioned something about "a smart kernel that knows when something is finished." How does that work?

+62
c# asynchronous async-ctp
Feb 26
3 answers

This is a very good question, and understanding it is key to understanding why asynchronous I/O is so important. The reason the new async/await feature was added in C# 5.0 is to make writing asynchronous code easier. Support for asynchronous processing on the server is not new, though; it has existed since ASP.NET 2.0.

As Steve showed you, with synchronous processing each request in ASP.NET (and WCF) takes one thread from the thread pool. The problem he demonstrated is a well-known one called thread pool starvation. If you do synchronous I/O on your server, the thread pool thread stays blocked (doing nothing) for the duration of the I/O. Since the number of threads in the thread pool is limited, under load this leads to a situation where all the thread pool threads are blocked waiting on I/O and requests start queuing up, which drives up response time. Because all the threads are just waiting for I/O to complete, you will see CPU usage close to 0% even while response times go through the roof.
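The effect can be modeled in a few lines. In this sketch a SemaphoreSlim plays the role of a tiny two-thread pool, and the numbers (2 "threads", 6 requests, 100 ms of blocking I/O) are made up purely for illustration:

```csharp
using System;
using System.Diagnostics;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

class StarvationModel
{
    static void Main()
    {
        // Pretend the pool has only 2 threads; the semaphore stands in for it.
        var pool = new SemaphoreSlim(2);
        var sw = Stopwatch.StartNew();

        var requests = Enumerable.Range(0, 6).Select(_ => Task.Run(() =>
        {
            pool.Wait();                  // wait for a free "pool thread"
            try { Thread.Sleep(100); }    // blocking I/O pins the thread
            finally { pool.Release(); }
        })).ToArray();

        Task.WaitAll(requests);

        // 6 requests through 2 threads at 100 ms each takes about 300 ms:
        // the later requests sat in a queue, which is the starvation effect.
        Console.WriteLine($"elapsed: about {sw.ElapsedMilliseconds} ms");
    }
}
```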

What you are asking (why can't we just use a bigger thread pool?) is a very good question. In fact, that is how most people have solved the problem of thread pool starvation until now: just put more threads in the thread pool. Some Microsoft documentation even points to that as the fix for situations where thread pool starvation can occur. It is an acceptable solution, and before C# 5.0 it was much easier than rewriting your code to be completely asynchronous.

There are several problems with this approach, though:

  • There is no value that works in all situations: the number of thread pool threads you need grows linearly with the I/O duration and the load on your server. Unfortunately, I/O latency is mostly unpredictable. Here is an example: suppose you make HTTP requests to a third-party web service from your ASP.NET application, and those requests take about 2 seconds to complete. You run into thread pool starvation, so you decide to increase the thread pool size to, say, 200 threads, and it starts working fine again. The problem is that next week the web service may have technical problems that increase its response time to 10 seconds. All of a sudden thread pool starvation is back, because the threads are blocked 5 times longer, so you now need to increase the number 5 times, to 1,000 threads.

  • Scalability and performance: the second problem is that if you do this, you are still using one thread per request. Threads are an expensive resource. Each managed thread in .NET requires 1 MB of memory for its stack. For a web page doing I/O that lasts 5 seconds, under a load of 500 requests per second, you need 2,500 threads in the thread pool, which means 2.5 GB of memory spent on thread stacks that are sitting there doing nothing. Then you have the problem of context switching, which takes a heavy toll on the performance of your machine (affecting all the services on it, not just your web application). Although Windows does a fairly good job of ignoring waiting threads, it is not designed to handle such a large number of them. Remember that the highest efficiency is achieved when the number of running threads equals the number of logical CPUs on the machine (usually no more than 16).
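The numbers in the second bullet are just arrival rate times latency (Little's law); a quick check of the arithmetic:

```csharp
using System;

class ThreadCostArithmetic
{
    static void Main()
    {
        int requestsPerSecond = 500;  // load from the example above
        int ioSeconds = 5;            // blocking I/O duration per request

        // Requests in flight = rate * latency (Little's law); with
        // synchronous I/O every one of them pins a thread-pool thread.
        int threadsNeeded = requestsPerSecond * ioSeconds;

        // Roughly 1 MB of stack per managed thread.
        double stackGb = threadsNeeded / 1000.0;

        Console.WriteLine($"{threadsNeeded} threads, ~{stackGb} GB of thread stacks");
    }
}
```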

So, while increasing the size of the thread pool is a solution, and people have been doing it for a decade (even in Microsoft's own products), it is less scalable and less efficient in terms of memory and CPU use, and you are always at the mercy of a sudden increase in I/O latency that brings starvation back. Prior to C# 5.0, the complexity of asynchronous code was not worth the trouble for many people. async/await changes everything, because now you can get the scalability of asynchronous I/O and write simple code at the same time.

More details: http://msdn.microsoft.com/en-us/library/ff647787.aspx "Use asynchronous calls to invoke web services or remote objects when there is an opportunity to perform additional parallel processing while the call proceeds. Where possible, avoid synchronous (blocking) calls to web services, because outgoing web service calls are made using threads from the ASP.NET thread pool. Blocking calls reduce the number of threads available to process other incoming requests."

+53
Feb 27
  • Async/await is not based on threads; it is based on asynchronous processing. When you do an asynchronous wait in ASP.NET, the request thread is returned to the thread pool, so there are no threads servicing that request until the async operation completes. Since the per-request overhead is lower than the per-thread overhead, this means async/await can scale better than the thread pool.
  • The request has a count of outstanding asynchronous operations. This count is managed by the ASP.NET SynchronizationContext implementation. You can read more about SynchronizationContext in my MSDN article; it describes how ASP.NET's SynchronizationContext works and how await uses SynchronizationContext.
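You can observe the "no thread is servicing the request" behavior in a plain console app. There is no SynchronizationContext in a console app, so the continuation after the await is simply scheduled on whichever thread pool thread happens to be free:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class ThreadReturnSketch
{
    static async Task Main()
    {
        Console.WriteLine($"before await: thread {Thread.CurrentThread.ManagedThreadId}");

        // While this timer ticks, zero threads are dedicated to "the request".
        await Task.Delay(100);

        // The continuation runs on a free thread-pool thread, which may or
        // may not be the same thread as before the await.
        Console.WriteLine($"after await:  thread {Thread.CurrentThread.ManagedThreadId}");
    }
}
```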

Asynchronous processing in ASP.NET was possible before async/await: you could use asynchronous pages, and use EAP components such as WebClient (event-based asynchronous programming is a style of asynchronous programming based on SynchronizationContext). async/await also uses SynchronizationContext, but with a much simpler syntax.
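A toy contrast of the two styles. EapWorker below is a made-up stand-in for an EAP component like WebClient (whose completion was signalled via a DownloadStringCompleted-style event); the point is only the shape of the code:

```csharp
using System;
using System.Threading.Tasks;

// Made-up EAP-style component: completion is signalled via an event.
class EapWorker
{
    public event EventHandler<string> Completed;
    public void StartWork() => Task.Run(() => Completed?.Invoke(this, "done"));
}

class EapVsAwait
{
    static async Task Main()
    {
        // EAP style: the "rest of the method" lives in an event handler.
        var worker = new EapWorker();
        var tcs = new TaskCompletionSource<string>();
        worker.Completed += (_, result) => tcs.SetResult(result);
        worker.StartWork();
        Console.WriteLine(await tcs.Task);   // prints "done"

        // await style: the same logic simply reads top to bottom.
        string result = await Task.Run(() => "done");
        Console.WriteLine(result);           // prints "done"
    }
}
```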

+29
Feb 26

Imagine the thread pool as a set of workers you have employed to do your work. Your workers run fast, executing the CPU instructions of your code.

Now your work happens to depend on the work of another, slow guy; that slow guy is the disk or the network. For example, your work can have two parts: one part that has to execute before the slow guy's work, and one part that has to execute after it.

How would you advise your workers to do their work? Would you tell each worker: "Do the first part, then wait until the slow guy is done, and then do your second part"? Would you keep hiring more workers just because they all seem to be stuck waiting for this slow guy and you cannot serve new customers? No!

Instead, you would ask each worker to do the first part, and ask the slow guy to come back and drop a message in a queue when he is done. You would tell each worker (or perhaps a dedicated subset of workers) to look for "done" messages in the queue and perform the second part of the work.

The smart kernel mentioned above is the operating system's ability to maintain exactly such a queue of completion messages for slow disk and network I/O.
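The analogy maps onto code roughly like this sketch, where a BlockingCollection plays the message queue and a background task plays the slow guy (names and delays are invented for illustration):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

class CompletionQueueSketch
{
    static void Main()
    {
        // The queue the slow guy drops "done" messages into.
        var completions = new BlockingCollection<string>();

        // The slow guy (disk/network): finishes later and posts a message.
        Task.Run(() =>
        {
            Thread.Sleep(100);                 // simulated I/O latency
            completions.Add("request-1 done");
            completions.CompleteAdding();
        });

        // A dedicated worker picks up messages and does the second part.
        // No worker is tied up per request while the I/O is in flight.
        foreach (var message in completions.GetConsumingEnumerable())
            Console.WriteLine($"worker handled: {message}");
    }
}
```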

+6
Aug 11 '14 at 19:49


