Speaking from my experience with large IRC servers, we used to use select() and poll() (because epoll()/kqueue() weren't available). At around 700 simultaneous clients, the server would be using 100% of a CPU (the IRC server was not multithreaded). Yet, interestingly, the server would still perform well. At around 4,000 clients, the server would start to lag.
The reason for this was that at around 700 clients, when we got back from select() there would typically be only one client ready for processing, and the for() loop scanning to find out which client it was would be eating most of the CPU. As we got more clients, each call to select() would return more and more clients in need of processing, so each call actually became more efficient.
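To make that bottleneck concrete, here is a minimal sketch of the pattern using poll() (the array names and handle_client() are hypothetical, not from the original server): the scan after each wakeup walks every registered client, even when only one of them has data.

```c
#include <poll.h>

#define MAX_CLIENTS 4000

static struct pollfd fds[MAX_CLIENTS];  /* filled in as clients connect */
static int nclients;                    /* number of entries in use */

/* Stub: read and process whatever this client sent. */
static void handle_client(int fd) { (void)fd; }

static void event_loop(void)
{
    for (;;) {
        /* Block until at least one client is ready. */
        int ready = poll(fds, nclients, -1);
        if (ready <= 0)
            continue;

        /* The expensive part: walk every registered client to find the
         * few whose revents are set. With ~700 clients and typically one
         * ready per wakeup, nearly the whole loop is wasted work, and it
         * runs again on every single wakeup. */
        for (int i = 0; i < nclients && ready > 0; i++) {
            if (fds[i].revents & (POLLIN | POLLERR | POLLHUP)) {
                handle_client(fds[i].fd);
                ready--;
            }
        }
    }
}
```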
Moving to epoll()/kqueue(), similarly specced machines would trivially deal with 10,000 clients, and some (admittedly more powerful machines, but still machines that would be considered tiny by today's standards) have held 30,000 clients without breaking a sweat.
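For contrast, a minimal epoll() sketch under the same assumptions (again with a hypothetical handle_client()): each client is registered with the kernel once, and each wakeup returns only the descriptors that are actually ready, so there is no per-wakeup scan over all 10,000+ clients.

```c
#include <sys/epoll.h>

#define MAX_EVENTS 64

/* Stub: read and process whatever this client sent. */
static void handle_client(int fd) { (void)fd; }

/* Register a newly accepted client once; the kernel remembers it. */
static void add_client(int epfd, int client_fd)
{
    struct epoll_event ev = { .events = EPOLLIN, .data.fd = client_fd };
    epoll_ctl(epfd, EPOLL_CTL_ADD, client_fd, &ev);
}

static void event_loop(int epfd)    /* epfd comes from epoll_create1(0) */
{
    struct epoll_event events[MAX_EVENTS];

    for (;;) {
        /* Only ready descriptors come back, so per-wakeup cost scales
         * with the number of active clients, not total clients. */
        int n = epoll_wait(epfd, events, MAX_EVENTS, -1);
        for (int i = 0; i < n; i++)
            handle_client(events[i].data.fd);
    }
}
```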
The experiments I've seen with SIGIO suggest that it works well for applications where latency is extremely important and where there are only a few active clients doing very little individual work.
I would recommend using epoll()/kqueue() over select()/poll() in almost any situation. I haven't experimented with splitting clients between threads. To be honest, I've never found a service that needed enough optimization work on the front-end client handling to justify experimenting with threads.
Perry Lorier