What are the benefits of moving from blocking to non-blocking sockets?
Increased speed, availability, and throughput (in my experience). I had an IndySockets client that handled about 15 requests per second; when I switched to straight asynchronous sockets, the throughput rose to about 90 requests per second (on the same machine). In a separate test on a data-center server with a 30 Mbps connection, I was able to handle more than 300 requests per second.
Can client disconnects be detected (elegantly or not)?
This is one thing I haven't had to try yet, since all of my code has been on the client side.
Which component suite is the best? By best, I mean: fast, well supported, with good tooling, and easy to use.
You can build your own socket client in a couple of days, and it can be very reliable and fast ... much faster than most off-the-shelf products I've seen. Feel free to take a look at my asynchronous socket client: http://codesprout.blogspot.com/2011/04/asynchronous-http-client.html
Update:
(in response to Mikey's comments)
I was asking for a general technical explanation of how non-blocking sockets (NBS) increase throughput as opposed to a properly designed blocking-socket (BS) server.
Let's take a heavily loaded server as an example: say your server needs to handle 1000 connections at any given time. With blocking sockets you would need to create 1000 threads, and even if they are mostly idle, the CPU will still spend a lot of time context switching. As the number of clients grows, you have to add threads to keep up, and the CPU inevitably spends even more time switching between them. For every connection you establish with a blocking socket, you incur the overhead of spinning up a new thread, and eventually the overhead of cleaning up after it. Of course, the first thing that comes to mind is: why not use the ThreadPool? You could reuse threads and reduce the thread creation/cleanup overhead.
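To make the contrast concrete, here is a minimal sketch of the non-blocking alternative using Python's `selectors` module (the port number and function names are my own, for illustration): a single thread multiplexes every connection, so a thousand mostly idle clients cost one registered file descriptor each instead of one blocked thread each. Note the zero-byte read, which is also how an orderly client disconnect shows up.

```python
import selectors
import socket

# One selector multiplexes all connections on a single thread,
# instead of dedicating a blocked thread to each client.
sel = selectors.DefaultSelector()

def accept(server_sock):
    conn, _addr = server_sock.accept()
    conn.setblocking(False)  # the key switch: non-blocking mode
    sel.register(conn, selectors.EVENT_READ, handle)

def handle(conn):
    data = conn.recv(4096)   # won't block; select() said it's readable
    if data:
        conn.send(data)      # echo the payload back
    else:                    # zero-byte read: client disconnected
        sel.unregister(conn)
        conn.close()

def serve(port, rounds):
    server = socket.socket()
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("127.0.0.1", port))
    server.listen()
    server.setblocking(False)
    sel.register(server, selectors.EVENT_READ, accept)
    for _ in range(rounds):  # bounded loop so the sketch terminates
        for key, _mask in sel.select(timeout=1):
            key.data(key.fileobj)  # dispatch to accept() or handle()
```

This is the readiness model (select/epoll); Windows completion ports, discussed next, invert it, but the thread economics are the same.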
Here is how it is done on Windows (hence the .NET connection): you could, but the first thing you will notice about the .NET ThreadPool is that it has two types of threads, and that is no coincidence: worker threads and I/O completion port threads. Asynchronous sockets use I/O completion ports, which "allow a single thread to perform simultaneous I/O operations on different handles, or even simultaneous read and write operations on the same handle." (1) The I/O completion port threads are specifically designed to handle I/O far more efficiently than you could ever achieve with the worker threads in the ThreadPool, short of writing your own kernel-mode driver.
"The completion port uses some special voodoo to make sure that only a certain number of threads can start at once - if one thread is blocked in kernel mode, it will automatically start another one." ( 2 )
There are other advantages as well: "In addition to the non-blocking advantage of overlapped socket I/O, the other advantage is better performance, because you save a buffer copy between the TCP stack buffer and the user buffer for each I/O call." (3)