What resources does a blocked thread consume?

One of the main goals of writing code in an asynchronous programming model (more specifically, using callbacks instead of blocking a thread) is to minimize the number of blocked threads in the system.

For running threads, this goal is obvious because of context-switching and synchronization costs.

But what about blocked threads? Why is it so important to reduce their number?

For example, while waiting for a response from a web server, a thread is blocked: it does not consume any processor time and does not take part in any context switching.

So my question is: besides RAM (about 1 MB per thread?), what other resources does a blocked thread consume?

And another, more subjective question: in what cases does the cost really justify the complexity of writing asynchronous code (for example, splitting your nice coherent method into many BeginXXX and EndXXX methods, and turning parameters and local variables into class fields)?
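To make that concrete, here is a minimal sketch of the kind of split I mean, using WebRequest's real BeginGetResponse/EndGetResponse pair (the class, method, and field names are just for illustration):

    using System;
    using System.IO;
    using System.Net;

    class PageDownloader
    {
        // Synchronous version: one coherent method, everything stays local.
        public string DownloadPage(string url)
        {
            var request = WebRequest.Create(url);
            using (var response = request.GetResponse())        // the calling thread blocks here
            using (var reader = new StreamReader(response.GetResponseStream()))
            {
                return reader.ReadToEnd();
            }
        }

        // Asynchronous (APM) version: the method is split across a callback,
        // and what used to be parameters/locals become fields so the callback can reach them.
        private WebRequest _request;
        private Action<string> _onCompleted;

        public void BeginDownloadPage(string url, Action<string> onCompleted)
        {
            _onCompleted = onCompleted;
            _request = WebRequest.Create(url);
            _request.BeginGetResponse(OnGetResponse, null);      // returns immediately, no blocked thread
        }

        private void OnGetResponse(IAsyncResult ar)
        {
            using (var response = _request.EndGetResponse(ar))
            using (var reader = new StreamReader(response.GetResponseStream()))
            {
                _onCompleted(reader.ReadToEnd());
            }
        }
    }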

UPDATE - additional reasons that I did not mention or did not give enough weight to:

  • More threads mean more contention on (and blocking of) shared resources

  • More threads mean more costly thread creation and teardown

  • The system can actually run out of threads/RAM and stop servicing clients (in a web server scenario this can really degrade the service)

+6
multithreading c# asynchronous
3 answers

So my question is: besides RAM (about 1 MB per thread?), what other resources does a blocked thread consume?

This is one of the biggest ones. However, there is a reason the ThreadPool in .NET allows so many threads per core - in 3.5 the default was 250 worker threads per core in the system. (In .NET 4 it depends on system information such as virtual address size, platform, and so on - there is no fixed default any more.) Threads, especially blocked threads, really are not that expensive...
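If you want to see the limits on your own machine, a small sketch like this prints the pool's current maximums (no special setup assumed):

    using System;
    using System.Threading;

    class ThreadPoolLimits
    {
        static void Main()
        {
            int workerThreads, ioThreads;
            ThreadPool.GetMaxThreads(out workerThreads, out ioThreads);

            // The defaults scale with the number of cores (and, in .NET 4,
            // with platform details such as the virtual address size).
            Console.WriteLine("Processors:                 {0}", Environment.ProcessorCount);
            Console.WriteLine("Max worker threads:         {0}", workerThreads);
            Console.WriteLine("Max I/O completion threads: {0}", ioThreads);
        }
    }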

That being said, I would argue that from a code-management standpoint it is worth reducing the number of blocked threads. Every blocked thread is an operation that must, at some point, return and become unblocked. Having many of these means you have quite a complicated set of code to manage. Keeping that number down helps keep the code base simpler and more maintainable.

And another, more subjective question: in what cases does the cost really justify the complexity of writing asynchronous code (for example, splitting your nice coherent method into many BeginXXX and EndXXX methods, and turning parameters and local variables into class fields)?

Right now, this is often painful; it depends on the scenario. However, the Task<T> class in .NET 4 improves this dramatically for many scenarios. Using the TPL is much less painful than the APM pattern (BeginXXX/EndXXX) or even the EAP.
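For example, an existing BeginXXX/EndXXX pair can be wrapped with Task.Factory.FromAsync, so the "rest of the method" becomes a continuation that lives next to the code starting the work instead of in a separate callback with hoisted fields. A rough sketch (assuming .NET 4; the URL and method names are only illustrative):

    using System;
    using System.IO;
    using System.Net;
    using System.Threading.Tasks;

    class TplDownload
    {
        static Task<string> DownloadPageAsync(string url)
        {
            var request = WebRequest.Create(url);

            // Wrap the APM pair in a Task<WebResponse>, then chain the rest of
            // the work as a continuation - no fields, no separate EndXXX handler.
            return Task<WebResponse>.Factory
                .FromAsync(request.BeginGetResponse, request.EndGetResponse, null)
                .ContinueWith(t =>
                {
                    using (var response = t.Result)
                    using (var reader = new StreamReader(response.GetResponseStream()))
                    {
                        return reader.ReadToEnd();
                    }
                });
        }

        static void Main()
        {
            DownloadPageAsync("http://example.com")
                .ContinueWith(t => Console.WriteLine("Got {0} characters", t.Result.Length))
                .Wait();
        }
    }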

This is why the language designers are putting so much effort into improving this situation going forward. Their goal is to make asynchronous code much easier to write, so that it gets used more often.

+6

In addition to whatever resources it holds, a blocked thread may be holding a lock; the size of the thread pool also counts. If you have hit the maximum thread pool size (if I remember correctly, in .NET 4 the maximum is 100 threads per processor), you simply cannot get anything else onto the thread pool until at least one thread frees up.
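A contrived sketch of that starvation effect (the artificially small limit is just for illustration; don't change the pool size like this in real code):

    using System;
    using System.Threading;

    class PoolStarvation
    {
        static void Main()
        {
            // SetMaxThreads cannot go below the pool minimum, so use the core
            // count as an artificial ceiling to make the effect easy to see.
            int limit = Environment.ProcessorCount;
            ThreadPool.SetMaxThreads(limit, limit);

            // Occupy every worker thread with a "blocked" work item.
            for (int i = 0; i < limit; i++)
            {
                ThreadPool.QueueUserWorkItem(_ =>
                {
                    Console.WriteLine("Blocking a pool thread...");
                    Thread.Sleep(5000);   // stands in for a lock or I/O wait
                });
            }

            // This item sits in the queue until one of the blocked threads frees up.
            ThreadPool.QueueUserWorkItem(_ => Console.WriteLine("Finally got a thread!"));

            Console.ReadLine();
        }
    }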

0

I would like to point out that the 1 MB figure for stack memory (or 256 KB, or whatever it is set to) is a reservation; while it does eat into the available address space, the actual memory is only committed as it is needed.
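For example, you can ask for a smaller reservation explicitly when creating a thread yourself; a tiny sketch (the 256 KB figure is just an example):

    using System;
    using System.Threading;

    class StackReservation
    {
        static void Main()
        {
            // The second constructor argument is the maximum stack size in bytes.
            // The full amount is reserved in the process's address space up front,
            // but physical memory is only committed as the stack actually grows.
            var worker = new Thread(() => Thread.Sleep(1000), 256 * 1024);
            worker.Start();
            worker.Join();

            Console.WriteLine("Thread ran with a 256 KB stack reservation.");
        }
    }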

On the other hand, having a very large number of threads does bog down the task scheduler somewhat, since it has to keep track of them (which ones have become runnable since the last tick, and so on).

0
