I have an echo server application based on TcpListener. It accepts clients, reads their data and writes the same data back. I developed it with the async/await approach, using the XXXAsync methods provided by the framework.
I set up performance counters to measure the number of messages and bytes per second, as well as the number of connected sockets.
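For reference, these are plain custom performance counters updated from the server code, roughly like this (a minimal sketch using System.Diagnostics; the category and counter names here are made up for illustration):

// Sketch only: category/counter names are illustrative, not the real ones.
if (!PerformanceCounterCategory.Exists("EchoServer"))
{
    var counters = new CounterCreationDataCollection
    {
        new CounterCreationData("Messages/sec", "Echoed messages per second",
            PerformanceCounterType.RateOfCountsPerSecond32),
        new CounterCreationData("Bytes/sec", "Echoed bytes per second",
            PerformanceCounterType.RateOfCountsPerSecond32),
        new CounterCreationData("Connected Sockets", "Currently connected sockets",
            PerformanceCounterType.NumberOfItems32)
    };
    PerformanceCounterCategory.Create("EchoServer", "Echo server counters",
        PerformanceCounterCategoryType.SingleInstance, counters);
}

var messagesPerSec = new PerformanceCounter("EchoServer", "Messages/sec", false);
var connectedSockets = new PerformanceCounter("EchoServer", "Connected Sockets", false);
// In the echo loop: messagesPerSec.Increment();
// On connect/disconnect: connectedSockets.Increment() / connectedSockets.Decrement();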
I created a test application that runs 1400 asynchronous TcpClient instances, each sending a 1 KB message every 100-500 ms. Clients wait a random 10-1000 ms before starting, so they do not all try to connect at the same time. It works well: in PerfMon I see the 1400 clients connected and sending messages at a good rate. I run the client application from another computer. The server's CPU and memory usage is very low; it is an Intel Core i7 with 8 GB of RAM. The client machine seems busier (an i5 with 4 GB of RAM), but is still not even at 25%.
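Each test client does roughly the following (a simplified sketch, not the actual test code; RunClientAsync is just a name I use here):

// Sketch of one test client: random startup delay, then a 1 KB echo round-trip every 100-500 ms.
static async Task RunClientAsync(string host, int port, CancellationToken token)
{
    var rnd = new Random();
    await Task.Delay(rnd.Next(10, 1000), token);               // staggered start

    using (var client = new TcpClient())
    {
        await client.ConnectAsync(host, port);
        var stream = client.GetStream();
        var payload = new byte[1024];                           // 1 KB message
        var buffer = new byte[1024];

        while (!token.IsCancellationRequested)
        {
            await stream.WriteAsync(payload, 0, payload.Length, token);
            await stream.ReadAsync(buffer, 0, buffer.Length, token);   // wait for the echo
            await Task.Delay(rnd.Next(100, 500), token);        // 100-500 ms between messages
        }
    }
}

// The test app starts 1400 of these tasks and awaits them all with Task.WhenAll.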
The problem appears when I run a second client application. Client connections start to fail. I do not see a big increase in messages per second (roughly 20% more), and the number of connected clients hovers around 1900-2100 rather than the expected 2800. Performance also degrades slightly, and the graph shows bigger swings between the maximum and minimum messages per second than before.
However, CPU usage does not even reach 40%, and memory usage is still low. I tried increasing the thread pool size on both the client and the server:
ThreadPool.SetMaxThreads(5000, 5000);
ThreadPool.SetMinThreads(2000, 2000);
On the server, connections are accepted in a loop:
while (true)
{
    var client = await _server.AcceptTcpClientAsync();
    HandleClientAsync(client);
}
The HandleClientAsync function returns a Task, but as you can see, the loop does not await it; it just goes on to accept the next client. The handler looks something like this:
public async Task HandleClientAsync(TcpClient client)
{
    while (client.Connected && !_cancellation.IsCancellationRequested)
    {
        var msg = await ReadMessageAsync(client);
        await WriteMessageAsync(client, msg);
    }
}
These two functions only read and write to the stream asynchronously.
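For completeness, they are along these lines (a sketch only; I am assuming a simple 4-byte length-prefix framing here, and ReadExactlyAsync is just an illustrative helper):

// Sketch only: assumes each message is preceded by a 4-byte little-endian length prefix.
private static async Task<byte[]> ReadMessageAsync(TcpClient client)
{
    var stream = client.GetStream();
    var header = await ReadExactlyAsync(stream, 4);
    return await ReadExactlyAsync(stream, BitConverter.ToInt32(header, 0));
}

private static async Task WriteMessageAsync(TcpClient client, byte[] msg)
{
    var stream = client.GetStream();
    var header = BitConverter.GetBytes(msg.Length);
    await stream.WriteAsync(header, 0, header.Length);
    await stream.WriteAsync(msg, 0, msg.Length);
}

// Illustrative helper: loops until 'count' bytes have been read or the connection is closed.
private static async Task<byte[]> ReadExactlyAsync(Stream stream, int count)
{
    var buffer = new byte[count];
    int offset = 0;
    while (offset < count)
    {
        int read = await stream.ReadAsync(buffer, offset, count - offset);
        if (read == 0) throw new EndOfStreamException("Connection closed");
        offset += read;
    }
    return buffer;
}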
I saw that I can start the TcpListener with a backlog parameter, but what is its default value?
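For reference, this is how I understand the backlog would be set explicitly (the value here is arbitrary, just to show the overload):

var listener = new TcpListener(IPAddress.Any, port);
listener.Start(1000);   // Start(int backlog) overload; 1000 is just an example value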
What could be the reason the application does not scale until it maxes out the CPU?
What approach and which tools should I use to find out what the actual problem is?
UPDATE
I tried the Task.Yield and Task.Run approaches and they did not help.
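Concretely, the two variants I tried look roughly like this (sketches; comments are mine):

// Variant 1: push each handler onto the thread pool so the accept loop gets back to
// AcceptTcpClientAsync as fast as possible.
while (true)
{
    var client = await _server.AcceptTcpClientAsync();
    Task.Run(() => HandleClientAsync(client));
}

// Variant 2: yield inside the handler before doing any work.
public async Task HandleClientAsync(TcpClient client)
{
    await Task.Yield();   // give control back to the accept loop immediately
    // ... same read/echo loop as before ...
}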
This also happens with the server and client running locally on the same computer. Increasing the number of clients or messages per second actually reduces the throughput of the service: 600 clients sending a message every 100 ms yield more throughput than 1000 clients sending a message every 100 ms.
I see two exceptions on the client side when connecting more than 2000 clients. From about 1500 clients onward I see exceptions at startup, but the clients eventually connect. Beyond that, I see a lot of connects/disconnects:
"An existing connection was forcibly closed by the remote host." (System.Net.Sockets.SocketException)
"Cannot write data to the transport connection: an existing connection was forcibly closed by the remote host." (System.IO.IOException)
UPDATE 2
I created a simple sample project with a server and a client using async/await, and it scales as expected.
The project where I have the scalability problem is a WebSocket server, and even though it uses the same approach, something is clearly causing contention. There is a console application hosting the component and another console application generating the load (it requires at least Windows 8).
Please note that I am not asking for a direct fix, but for methods or approaches to find out what is causing this behavior.