The most efficient way to handle client connections (socket programming)

In every tutorial and example I've seen on the Internet for Linux/Unix socket programming, the server-side code always includes an infinite loop that checks for client connections. Examples:

http://www.thegeekstuff.com/2011/12/c-socket-programming/

http://tldp.org/LDP/LG/issue74/tougher.html#3.2

Is there a more efficient way to structure the server-side code so that it does not use an infinite loop, or encodes the loop in a way that consumes fewer system resources?

+4
7 answers

The endless loop in these examples is already efficient. The accept() call is a blocking call: the function does not return until a client connects to the server. Execution of the thread calling accept() is suspended and does not consume any processing power.

Think of accept() as a call to join(), or as waiting on a mutex/lock/semaphore.

Of course, there are many other ways to handle incoming connections, but those other methods deal with the blocking nature of accept(): the call is difficult to cancel, so non-blocking alternatives exist that let the server perform other work while waiting for an incoming connection. One of these alternatives is select(). Other alternatives are less portable because they rely on low-level operating system calls that signal a connection through a callback function, an event, or some other asynchronous mechanism handled by the operating system.
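
For illustration, here is a minimal sketch of the select() approach in C. The port number (8080) and the omitted error handling on socket()/bind() are simplifications for the example, not taken from any of the linked tutorials:

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/select.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    int main(void)
    {
        /* Create and bind a listening socket (port chosen for illustration). */
        int listen_fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(8080);
        bind(listen_fd, (struct sockaddr *)&addr, sizeof(addr));
        listen(listen_fd, SOMAXCONN);

        for (;;) {
            fd_set readfds;
            FD_ZERO(&readfds);
            FD_SET(listen_fd, &readfds);

            /* Block until the listening socket is readable, i.e. a
               connection is pending; other descriptors could be
               added to the set here as well. */
            if (select(listen_fd + 1, &readfds, NULL, NULL, NULL) < 0) {
                perror("select");
                break;
            }

            if (FD_ISSET(listen_fd, &readfds)) {
                int client_fd = accept(listen_fd, NULL, NULL);
                if (client_fd >= 0) {
                    /* ... handle the client, then close. */
                    close(client_fd);
                }
            }
        }
        close(listen_fd);
        return 0;
    }

The select() call blocks just like accept() does, but it can watch several descriptors at once, so the same loop can multiplex many clients.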

+6

For C++, you can look at boost.asio. You can also look at, for example, the asynchronous I/O functions (aio). There is also SIGIO.

Of course, even when using these asynchronous methods, your main program still needs to run in a loop, otherwise it would simply exit.
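
A rough sketch of the SIGIO mechanism on Linux, assuming a listening socket on an arbitrary port; this is simplified for the example (a real program would have to guard against the signal arriving just before pause()):

    #include <fcntl.h>
    #include <signal.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    static volatile sig_atomic_t connection_pending = 0;

    /* Signal handler: just record that the socket became readable. */
    static void on_sigio(int sig)
    {
        (void)sig;
        connection_pending = 1;
    }

    int main(void)
    {
        int listen_fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(8080);   /* port chosen for illustration */
        bind(listen_fd, (struct sockaddr *)&addr, sizeof(addr));
        listen(listen_fd, SOMAXCONN);

        signal(SIGIO, on_sigio);
        fcntl(listen_fd, F_SETOWN, getpid());  /* deliver SIGIO to this process */
        fcntl(listen_fd, F_SETFL,
              fcntl(listen_fd, F_GETFL) | O_ASYNC | O_NONBLOCK);

        /* The main program is still a loop, but it sleeps in pause()
           until the kernel signals activity on the socket. */
        for (;;) {
            pause();
            if (!connection_pending)
                continue;
            connection_pending = 0;
            /* Drain all pending connections; the socket is non-blocking. */
            for (;;) {
                int client_fd = accept(listen_fd, NULL, NULL);
                if (client_fd < 0)
                    break;            /* EAGAIN: nothing left to accept */
                close(client_fd);     /* ... handle the client here */
            }
        }
    }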

+1

The infinite loop maintains the running state of the server: when a client connection has been accepted and handled, the server does not exit; instead, it goes back to waiting for another client connection.

The call to accept() is blocking, i.e. it waits until a connection arrives. This is very efficient, using essentially zero system resources (until a connection is made, of course): the operating system's network drivers raise an event (or hardware interrupt) that wakes up the waiting thread.
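
For reference, the pattern those tutorials use boils down to something like this minimal sketch (the port number and omitted error handling are placeholders, not from the tutorials themselves):

    #include <unistd.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    int main(void)
    {
        int listen_fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(8080);   /* example port */
        bind(listen_fd, (struct sockaddr *)&addr, sizeof(addr));
        listen(listen_fd, SOMAXCONN);

        for (;;) {                     /* the "infinite loop" */
            /* accept() blocks here: the thread sleeps, consuming no
               CPU, until the kernel wakes it with a new connection. */
            int client_fd = accept(listen_fd, NULL, NULL);
            if (client_fd < 0)
                continue;
            /* ... read from / write to client_fd ... */
            close(client_fd);          /* then wait for the next client */
        }
    }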

+1

When you implement a server that listens for a potentially endless stream of connections, there is no way around some kind of endless loop. This is usually not a problem, because as long as your socket is not marked non-blocking, the accept() call will block until a new connection arrives. Because of this blocking, no system resources are wasted.

Event-based libraries are ultimately implemented on top of the same mechanisms described above.

0

Here is a good overview of the available methods: the C10K problem.

0

In addition to what has already been posted, it's pretty easy to see what happens with a debugger. You can single-step until the accept() line is reached, at which point the single-step highlighting disappears and the application just runs: the next line is not reached. If you put a breakpoint on the next line, it is not hit until a client connects.

0

We need to follow best practices for writing client-server programs. The best guide I can recommend right now is the C10K problem. For this case we can use select, poll, or epoll. Each has its own advantages and disadvantages.

If your code runs on a recent kernel version, I would recommend going with epoll. Click through to see an example program that demonstrates epoll.

With select, poll, or epoll, the call blocks until an event arrives, so your server does not spin in a busy loop consuming CPU time; a sketch follows below.
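
As a rough sketch of what an epoll-based accept loop looks like on Linux (the port, buffer size, and missing error handling are illustrative only; see the linked example for a fuller version):

    #include <unistd.h>
    #include <sys/epoll.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    #define MAX_EVENTS 64

    int main(void)
    {
        int listen_fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(8080);   /* example port */
        bind(listen_fd, (struct sockaddr *)&addr, sizeof(addr));
        listen(listen_fd, SOMAXCONN);

        int epfd = epoll_create1(0);
        struct epoll_event ev = { .events = EPOLLIN, .data.fd = listen_fd };
        epoll_ctl(epfd, EPOLL_CTL_ADD, listen_fd, &ev);

        for (;;) {
            struct epoll_event events[MAX_EVENTS];
            /* Blocks until at least one registered descriptor is
               ready; no CPU is consumed while waiting. */
            int n = epoll_wait(epfd, events, MAX_EVENTS, -1);
            for (int i = 0; i < n; i++) {
                if (events[i].data.fd == listen_fd) {
                    int client_fd = accept(listen_fd, NULL, NULL);
                    if (client_fd < 0)
                        continue;
                    /* Register the client so its reads are event-driven too. */
                    struct epoll_event cev = { .events = EPOLLIN,
                                               .data.fd = client_fd };
                    epoll_ctl(epfd, EPOLL_CTL_ADD, client_fd, &cev);
                } else {
                    char buf[512];
                    ssize_t r = read(events[i].data.fd, buf, sizeof(buf));
                    if (r <= 0)
                        close(events[i].data.fd);  /* closed fds leave the set */
                    /* ... otherwise handle the data in buf ... */
                }
            }
        }
    }

One epoll_wait() call can report readiness on thousands of registered sockets at once, which is why it scales so much better than re-scanning a descriptor list the way select and poll do.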

In my personal experience, epoll is the best way to go, because I noticed that the load on my server machine with 80K ACTIVE connections stayed very low compared to select and poll. The load average on my server machine was only 3.2 with 80K active connections :)

When testing with poll, I found that the server load average rose to 7.8 with only 30,000 active client connections :(.

0
