Are Socket.*Async methods threaded?

I'm currently trying to figure out the best way to minimize the number of threads my main TCP server uses, in order to maximize performance.

From what I've been reading recently about the new C# 5.0 async features, asynchronous does not necessarily mean multithreaded. It can mean splitting an operation into smaller pieces driven by a state machine, which are then interleaved with other operations. However, I don't see how that can apply to networking, since I'm basically waiting for input (from the client).

So I wouldn't want to use ReceiveAsync() on all of my sockets if it just keeps creating and tearing down threads (assuming it creates threads at all).

Therefore my question is, more or less: what architecture should the main server use to avoid one "thread" per connection?
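
To make that concrete, this is roughly the per-connection loop I have in mind (just a sketch; HandleClientAsync and HandleData are placeholders for my own code):

```csharp
using System;
using System.Net.Sockets;
using System.Threading.Tasks;

class ConnectionSketch
{
    // Rough sketch of what I mean: an await-based receive loop per connection.
    // While the ReadAsync is pending no thread sits blocked on this connection,
    // but I don't see what is actually servicing the wait underneath.
    public async Task HandleClientAsync(TcpClient client)
    {
        var buffer = new byte[4096];
        using (NetworkStream stream = client.GetStream())
        {
            int read;
            while ((read = await stream.ReadAsync(buffer, 0, buffer.Length)) > 0)
                HandleData(buffer, read);          // placeholder for real processing
        }
    }

    void HandleData(byte[] buffer, int count) { /* ... */ }
}
```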

A side question for bonus cool points: why is having lots of threads considered bad, given that any number of threads beyond the number of processor cores just makes the machine "fake" multithreading, like any other asynchronous approach would?

+7
4 answers

It seems that the *Async methods use IOCP (from looking through the code with Reflector).
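
For example, the event-based pattern around SocketAsyncEventArgs is how managed code typically rides those completion ports; a minimal, non-production receive sketch:

```csharp
using System;
using System.Net.Sockets;

// Minimal sketch (not production code) of the SocketAsyncEventArgs pattern that
// the *Async socket methods are built on. When ReceiveAsync returns true, the
// completion arrives later on an I/O completion thread; when it returns false,
// the operation already finished and we handle it inline. Pooling of the event
// args and guarding against deep synchronous recursion are omitted here.
static class IocpReceiveSketch
{
    public static void StartReceive(Socket socket)
    {
        var args = new SocketAsyncEventArgs();
        args.SetBuffer(new byte[4096], 0, 4096);    // buffer size chosen arbitrarily
        args.UserToken = socket;
        args.Completed += OnCompleted;

        if (!socket.ReceiveAsync(args))             // completed synchronously
            OnCompleted(socket, args);
    }

    static void OnCompleted(object sender, SocketAsyncEventArgs args)
    {
        var socket = (Socket)args.UserToken;

        if (args.SocketError != SocketError.Success || args.BytesTransferred == 0)
            return;                                 // error or peer closed: stop receiving

        // ... process args.Buffer[0 .. args.BytesTransferred) here ...

        StartReceive(socket);                       // post the next receive
    }
}
```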

+3

No, you will not necessarily be creating threads. There are two ways to be asynchronous without constantly spinning threads up and tearing them down:

  • You can have a "small" number of long-lived threads that sleep when there is no work (which means the OS never schedules them, so their resource cost is minimal). Then, when work arrives (i.e. an *Async method is called), wake one of them up and tell it what to do (see the sketch after this list). Say hello to the managed thread pool.
  • On Windows, the most efficient mechanism for async I/O is I/O completion ports, which synchronize access to I/O operations and allow a small number of threads to service massive workloads.
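
As a rough illustration of the first bullet (a sketch, not a benchmark): queue a batch of work items and notice how few distinct pool threads actually service them.

```csharp
using System;
using System.Threading;

// Rough illustration of the managed thread pool: work items are queued to a small
// set of long-lived pool threads rather than spinning up a thread per item.
class ThreadPoolSketch
{
    static void Main()
    {
        using (var done = new CountdownEvent(10))
        {
            for (int i = 0; i < 10; i++)
            {
                int item = i;                        // capture a copy for the closure
                ThreadPool.QueueUserWorkItem(_ =>
                {
                    Console.WriteLine("item {0} ran on pool thread {1}",
                                      item, Thread.CurrentThread.ManagedThreadId);
                    done.Signal();
                });
            }
            done.Wait();   // typically only a handful of distinct thread ids appear
        }
    }
}
```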

Regarding multiple threads:

Having multiple threads is not bad for performance if

  • the number of threads is not excessive.
  • threads do not oversaturate the processor.

If the number of threads is excessive, then obviously we burden the OS with tracking and scheduling all of those threads, which consumes global resources and slows everything down.

If the threads saturate the CPU, then maintaining fairness forces the OS into far more frequent context switches, and context switches hurt performance. In fact, with user-mode threads (which all highly scalable systems, such as RDBMSes, use), we make our lives harder precisely so that we can avoid context switches.

Update:

I just found this question, which supports the position that you cannot tell in advance how many threads are too many: there are too many unknown variables.

+8

John's answer is great. Regarding the "side question"... see http://en.wikipedia.org/wiki/Amdahl%27s_law . Amdahl's law says that serial code quickly limits the gains from parallel code. We also know that thread coordination (scheduling, context switching, etc.) is serial work, so at some point more threads means so many serial steps that you lose the benefits of parallelization and actually get negative returns. This is tricky stuff. That's why it is worth the effort to let .NET manage the threads while we define "tasks" and let the framework decide which thread to run them on. The framework can switch between tasks much more efficiently than the OS can switch between threads, because the OS has a lot of extra things it needs to worry about.
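
As a rough illustration, plugging a workload that is 90% parallelizable into Amdahl's formula, speedup = 1 / ((1 - p) + p / n), shows how quickly the returns flatten out (sketch only, the numbers are assumed):

```csharp
using System;

// Illustration only: Amdahl's law, speedup = 1 / ((1 - p) + p / n), where p is the
// parallelizable fraction of the work and n is the number of threads/cores.
class AmdahlSketch
{
    static double Speedup(double p, int n)
    {
        return 1.0 / ((1.0 - p) + p / n);
    }

    static void Main()
    {
        const double p = 0.9;                        // assume 90% of the work parallelizes
        foreach (int n in new[] { 1, 2, 4, 8, 16, 64, 1024 })
            Console.WriteLine("{0,5} threads -> {1:F2}x speedup", n, Speedup(p, n));
        // Even with 1024 threads the speedup is capped near 1 / (1 - p) = 10x, and the
        // serial coordination cost mentioned above pushes the real curve back down.
    }
}
```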

+1

Asynchronous operation can be done without a thread per connection and without a thread pool, using OS support for select or poll (Windows supports this too, exposed through Socket.Select). I'm not sure about its performance on Windows, but it's a very common idiom elsewhere.

One thread acts as a "pump" that watches the IO connections for changes and then dispatches messages to/from the other threads (presumably 0..n, depending on the model). Approaches with 0 or 1 additional threads fall into the EventMachine category, e.g. Twisted (Python) or POE (Perl). With >1 threads, the callers form an "implicit thread pool" (themselves) and basically just offload the blocking IO.
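
A bare-bones sketch of such a pump using Socket.Select (error handling, write readiness, and handing work off to other threads are all omitted; the class name and buffer size are just placeholders):

```csharp
using System;
using System.Collections.Generic;
using System.Net.Sockets;

// Sketch of a single-threaded "pump": one thread polls all connections with
// Socket.Select and handles whichever ones are readable, so no connection
// needs its own thread. Dispatching to worker threads is left out here.
class SelectPumpSketch
{
    readonly Socket listener;                        // assumed already bound and listening
    readonly List<Socket> clients = new List<Socket>();
    readonly byte[] buffer = new byte[4096];

    public SelectPumpSketch(Socket boundListener) { listener = boundListener; }

    public void Run()
    {
        while (true)
        {
            var readable = new List<Socket>(clients) { listener };
            Socket.Select(readable, null, null, 1000000);   // timeout in microseconds (1 s)

            foreach (Socket s in readable)
            {
                if (s == listener)
                {
                    clients.Add(listener.Accept());          // new connection
                }
                else
                {
                    int read = s.Receive(buffer);            // readable, so this won't block
                    if (read == 0) { clients.Remove(s); s.Close(); }
                    // else: hand buffer[0 .. read) off to whatever processes messages
                }
            }
        }
    }
}
```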

There are also approaches such as Actors, Continuations, or Fibers, exposed in the base models of some languages, which change how the core problem is approached: don't wait, react.

Happy coding.

0
