Question about listening and the backlog for sockets

I am writing a C# application that needs to handle incoming connections, and I have never done server-side programming before. This leads me to the following questions:

  • What are the pros and cons of a large backlog? Why shouldn't we just set the backlog to a huge number?
  • If I call Socket.Listen(10), do I need to call Listen() again after 10 Accept()s? Or do I need to call Listen() after each Accept()?
  • If I set my backlog to 0 and, hypothetically, two people try to connect to my server at the same time, what happens? (I call Socket.Select in a loop and check the listening socket for readability; after I process the first connection, will the second connection succeed on the next iteration if I call Listen() again?)

Thanks in advance.

+6
c# server-side sockets
3 answers

The listen backlog, as Peter said, is the queue that the operating system uses to hold connections that have been accepted by the TCP stack but not yet accepted by your program. Conceptually, when a client connects, it is placed in this queue until your Accept() call removes it and hands it to your program.

So the backlog is a setting that helps your server cope with peaks in connection attempts. Note that it relates only to peaks in concurrent connection attempts and has nothing to do with the maximum number of concurrent connections your server can support. For example, if your server receives 10 new connections per second, tuning the listen backlog is unlikely to have any effect, even if those connections are long-lived and your server is holding 10,000 concurrent connections (assuming the server is not maxing out the CPU serving the existing connections!). However, if a server occasionally sees short bursts of, say, 1,000 new connections per second, you can probably prevent some connections from being rejected by raising the listen backlog to provide a longer queue and therefore give your server more time to call Accept() for each connection.
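As a rough illustration of what "rejected during a burst" looks like from the client side, here is a minimal sketch (the loopback address, port 12345 and burst size are placeholders, and depending on the operating system a rejected attempt may show up as an immediate error or as a slow, eventually failing connect):

    using System;
    using System.Net;
    using System.Net.Sockets;

    class BurstClient
    {
        static void Main()
        {
            var serverEndPoint = new IPEndPoint(IPAddress.Loopback, 12345);
            int failed = 0;

            // Fire a short burst of connection attempts. Attempts that arrive while the
            // server's backlog queue is full never reach the server's Accept(); they
            // fail here with a SocketException instead.
            for (int i = 0; i < 1000; ++i)
            {
                var client = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
                try
                {
                    client.Connect(serverEndPoint);
                }
                catch (SocketException)
                {
                    ++failed;
                }
            }

            Console.WriteLine("Failed connection attempts: " + failed);
        }
    }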

As for the pros and cons: the pro is that you can handle bursts of concurrent connection attempts better, and the corresponding con is that the operating system has to reserve more space for the listen backlog queue because it is larger. So it is a trade-off between performance and resources.

Personally, I make the listen backlog something that can be adjusted externally through a configuration file.
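For example, a sketch of what that could look like with a classic App.config appSettings entry (the key name "ListenBacklog", the fallback of 100 and the port are illustrative choices, not part of the answer):

    using System.Configuration; // requires a project reference to System.Configuration
    using System.Net;
    using System.Net.Sockets;

    class ConfigurableListener
    {
        static int ReadBacklogFromConfig()
        {
            // Fall back to 100 if the setting is missing or malformed.
            int backlog;
            if (!int.TryParse(ConfigurationManager.AppSettings["ListenBacklog"], out backlog))
            {
                backlog = 100;
            }
            return backlog;
        }

        static void Main()
        {
            var listener = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
            listener.Bind(new IPEndPoint(IPAddress.Any, 12345));
            listener.Listen(ReadBacklogFromConfig()); // backlog comes from App.config
        }
    }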

How and when you call Listen() and Accept() depends on the style of socket code you are using. With synchronous code, you call Listen() once, with a value, say 10, for your backlog, and then you call Accept() in a loop. The Listen() call establishes the endpoint that your clients can connect to and conceptually creates the pending-connection queue of the specified size. The Accept() call removes a pending connection from the listen backlog queue, sets up a socket for use by the application, and passes it to your code as a newly established connection. If the time your code spends calling Accept(), processing the new connection, and looping back around to call Accept() again is larger than the gap between concurrent connection attempts, then you will start to accumulate entries in the listen backlog queue.
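A minimal sketch of that synchronous pattern (the port and the HandleClient placeholder are mine, not from the answer): Listen() is called exactly once, Accept() runs in a loop, and the backlog queue absorbs connections that arrive while HandleClient is still busy:

    using System.Net;
    using System.Net.Sockets;

    class SyncServer
    {
        static void Main()
        {
            var listener = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
            listener.Bind(new IPEndPoint(IPAddress.Any, 12345));
            listener.Listen(10); // Listen() once, with a backlog of 10 pending connections

            while (true)
            {
                // Accept() removes one pending connection from the backlog queue
                // and returns a new socket for that client.
                Socket client = listener.Accept();
                HandleClient(client); // if this takes too long, the backlog queue fills up
            }
        }

        static void HandleClient(Socket client)
        {
            // Placeholder for the per-connection work your server actually does.
            client.Close();
        }
    }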

With asynchronous sockets this can work a little differently. If you use asynchronous accepts, you call Listen() once, as before, and then post several (again, a configurable number of) asynchronous accepts. As each one completes, you handle the new connection and post a new asynchronous accept. This way you have a queue of pending connections and a pool of pending accepts, so you can accept connections faster (the asynchronous accepts complete on thread-pool threads, so you are no longer limited to a single tight accept loop). This is usually more scalable and gives you two knobs to tune for handling more concurrent connection attempts.
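A hedged sketch of that asynchronous variant: Listen() once, post several outstanding asynchronous accepts, and re-post one each time a connection completes (the count of 5 pending accepts, the backlog of 100 and the port are arbitrary choices):

    using System;
    using System.Net;
    using System.Net.Sockets;

    class AsyncServer
    {
        private const int PendingAccepts = 5; // how many accepts to keep outstanding

        static void Main()
        {
            var listener = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
            listener.Bind(new IPEndPoint(IPAddress.Any, 12345));
            listener.Listen(100);

            // Post several asynchronous accepts up front; each completes on a
            // thread-pool thread, so there is no single tight accept loop.
            for (int i = 0; i < PendingAccepts; ++i)
            {
                listener.BeginAccept(AcceptCallback, listener);
            }

            Console.ReadKey();
        }

        private static void AcceptCallback(IAsyncResult ar)
        {
            var listener = (Socket)ar.AsyncState;
            Socket client = listener.EndAccept(ar);         // the newly connected client
            listener.BeginAccept(AcceptCallback, listener); // keep the accept count topped up

            // ... hand "client" off to the rest of the application here ...
            client.Close();
        }
    }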

+13

The backlog is the queue of clients that are trying to connect to the server but have not yet been processed.

It covers the time between the moment a client actually connects to the server and the moment you Accept or EndAccept that client.

If accepting a client takes a long time, the backlog may fill up, and new client connections will be rejected until you have had time to process the clients already in the queue.

Regarding your questions:

  • I have no hard numbers on this. If the default value does not cause any problems (no rejected client connections), leave it at the default. If you see many errors when new clients try to connect, increase the number. However, that is probably a sign that you are taking too long to accept a new client; you should fix that problem before increasing the backlog;

  • No, this is handled by the system. The normal client-accept mechanism will take care of it;

  • See my previous explanations.

+2

Try this program and you will see what the backlog is useful for.

    using System;
    using System.Net;
    using System.Net.Sockets;
    using System.Threading;

    /* This program creates a TCP server socket. Then a large number of clients
       try to connect to it. The server counts the connected clients. The number
       of successfully connected clients depends on the BACKLOG_SIZE parameter. */
    namespace BacklogTest
    {
        class Program
        {
            private const int BACKLOG_SIZE = 0; //<<< Change this to 10, 20 ... 100 and see what happens!!!!
            private const int PORT = 12345;
            private const int maxClients = 100;
            private static Socket serverSocket;
            private static int clientCounter = 0;

            private static void AcceptCallback(IAsyncResult ar)
            {
                // Get the socket that handles the client request
                Socket listener = (Socket)ar.AsyncState;
                listener.EndAccept(ar);

                // The callback runs on a thread-pool thread, so increment the counter atomically
                int count = Interlocked.Increment(ref clientCounter);
                Console.WriteLine("Connected clients count: " + count.ToString() + " of " + maxClients.ToString());

                // do some other work
                for (int i = 0; i < 100000; ++i) { }

                listener.BeginAccept(AcceptCallback, listener);
            }

            private static void StartServer()
            {
                // Establish the local endpoint for the socket
                IPEndPoint localEndPoint = new IPEndPoint(IPAddress.Any, PORT);

                // Create a TCP/IP socket
                serverSocket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);

                // Bind the socket to the local endpoint and listen
                serverSocket.Bind(localEndPoint);
                serverSocket.Listen(BACKLOG_SIZE);
                serverSocket.BeginAccept(AcceptCallback, serverSocket);
            }

            static void Main(string[] args)
            {
                StartServer();

                // Clients connect to the server.
                for (int i = 0; i < 100; ++i)
                {
                    IPAddress ipAddress = IPAddress.Parse("127.0.0.1");
                    IPEndPoint remoteEP = new IPEndPoint(ipAddress, PORT);

                    // Create a TCP/IP socket and connect to the server
                    Socket client = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
                    client.BeginConnect(remoteEP, null, null);
                }

                Console.ReadKey();
            }
        }
    }
+2
