Handling dropped TCP packets in C#

I'm sending a large amount of data in one pass between a client and server written in C#. It works fine when I run the client and server on my local machine, but when I put the server on a remote machine on the Internet, data seems to get dropped.

I send 20,000 lines using the socket.Send() method and receive them in a loop that calls socket.Receive(). Each line is terminated by a unique character sequence, which I use to count the number of lines received (this is, if you like, the protocol). The protocol has been tested: even with fragmented messages, every line is counted correctly. On my local machine I receive all 20,000; over the Internet I receive somewhere between 17,000 and 20,000. It seems related to the slower connection the remote machine has. To add to the confusion, running Wireshark reduces the number of lost messages.

First of all, what causes this? Is it a TCP/IP problem, or is there something wrong with my code?

Secondly, how can I work around it? Receiving all 20,000 lines is vital.

Socket Receive Code:

    private static readonly Encoding encoding = new ASCIIEncoding();
    // ...
    while (socket.Connected)
    {
        byte[] recvBuffer = new byte[1024];
        int bytesRead = 0;
        try
        {
            bytesRead = socket.Receive(recvBuffer);
        }
        catch (SocketException)
        {
            if (!socket.Connected)
            {
                return;
            }
        }
        // Receive() returns 0 when the peer has shut down its side of the
        // connection; stop reading once the stream has been drained.
        if (bytesRead == 0)
        {
            return;
        }
        string input = encoding.GetString(recvBuffer, 0, bytesRead);
        CountStringsIn(input);
    }

Socket Send Code:

    private static readonly Encoding encoding = new ASCIIEncoding();
    // ...
    socket.Send(encoding.GetBytes(line)); // "line" holds the text to transmit
c# sockets tcp
4 answers

If packets are being dropped, you will see a delay in transmission, because dropped packets must be retransmitted. This can be very significant, although there is a TCP option called selective acknowledgment which, if supported by both sides, causes only the dropped packets to be retransmitted rather than every packet from the dropped one onward. There is no way to control this from your code. With TCP you can always assume that every packet is delivered, in order; if for some reason it cannot deliver every packet in order, the connection will be dropped, either by a timeout or by one end sending an RST.

What you are seeing is most likely the result of the Nagle algorithm. Instead of sending each piece of data as soon as it is posted, it sends the first piece and then waits for an acknowledgment from the other side. While it waits, it collects everything else you try to send, coalesces it into one larger packet, and then sends that. Since the maximum TCP packet size is 64K (65,535 bytes), it can combine quite a lot of data into one packet, although it is extremely unlikely to reach that size, particularly since the default Winsock buffer is around 10K or so (I forget the exact amount). In addition, if the receiver's window is smaller than 64K, it will only send as much as the last advertised receiver window size allows. The window size also limits the Nagle algorithm, since it cannot aggregate more data than the window size permits.

The reason you see this is that on the Internet, unlike on your local network, the first ACK takes longer to come back, so the Nagle algorithm coalesces more of your data into each packet. Locally, the ACK returns effectively instantly, so data goes out as fast as you can push it into the socket. You can disable the Nagle algorithm on the client side using setsockopt (Winsock) or Socket.SetSocketOption (.NET), but I strongly recommend that you do NOT disable Nagle on the socket unless you are 100% sure you know what you are doing. It is there for a very good reason.
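
For reference, a minimal sketch of how that option is set in .NET; shown only for completeness, since leaving Nagle enabled is usually the right choice:

    using System.Net.Sockets;

    // Disabling the Nagle algorithm on a .NET socket. Do this only if you
    // have measured a real latency problem that this change actually fixes.
    Socket socket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
    socket.SetSocketOption(SocketOptionLevel.Tcp, SocketOptionName.NoDelay, true);
    // Equivalent shorthand provided by the Socket class:
    socket.NoDelay = true;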

There is something wrong with your code if it relies on the number of completed Receive calls: you seem to expect one completed Receive for every Send call you make.

TCP is a stream-based protocol: you should not think in terms of individual packets or reads. Instead, keep reading data, expecting that sometimes you will not get a whole message in one read, and sometimes you will get more than one message in a single read. (A single read does not correspond to a single packet, either.)

You should either prefix each message with its length before sending it, or use a delimiter between messages.
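
As a sketch of the length-prefix approach: the SendMessage, ReceiveMessage, and ReceiveExactly helpers below are invented for illustration, not library APIs, and BitConverter uses the machine's native byte order, which is fine as long as both ends agree on it.

    using System;
    using System.Net.Sockets;
    using System.Text;

    static class Framing
    {
        // Prefix each message with a 4-byte length so the receiver
        // knows exactly where it ends, regardless of packet boundaries.
        public static void SendMessage(Socket socket, string message)
        {
            byte[] payload = Encoding.ASCII.GetBytes(message);
            socket.Send(BitConverter.GetBytes(payload.Length));
            socket.Send(payload);
        }

        public static string ReceiveMessage(Socket socket)
        {
            byte[] header = ReceiveExactly(socket, 4);
            int length = BitConverter.ToInt32(header, 0);
            byte[] payload = ReceiveExactly(socket, length);
            return Encoding.ASCII.GetString(payload);
        }

        // Loop until exactly "count" bytes have arrived; a single Receive
        // call may return fewer bytes than requested.
        static byte[] ReceiveExactly(Socket socket, int count)
        {
            byte[] buffer = new byte[count];
            int offset = 0;
            while (offset < count)
            {
                int read = socket.Receive(buffer, offset, count - offset, SocketFlags.None);
                if (read == 0)
                    throw new SocketException((int)SocketError.ConnectionReset);
                offset += read;
            }
            return buffer;
        }
    }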

This is definitely not a TCP error. TCP guarantees exactly-once, in-order delivery.

Which lines are missing? I would guess it is the last ones; try flushing from the sending end.
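
A .NET Socket has no explicit Flush, but one way to get the same effect is to shut down the send side once everything has been handed to Send; any buffered data is still transmitted, and the receiver's Receive then returns 0 only after the stream has been drained. A sketch, assuming the sender is finished after this batch:

    // After the final Send(), signal that no more data will follow. Buffered
    // data is still delivered, and the peer's Receive() returns 0 once it
    // has read everything, so the receiver knows when to stop looping.
    socket.Shutdown(SocketShutdown.Send);
    socket.Close();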

In addition, your "protocol" here (by which I mean the application-layer protocol you have invented) is lacking: you should consider sending the number of objects and/or their lengths, so the receiver knows when it has actually finished receiving everything.
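
For instance, a minimal sketch of announcing the count up front; the 4-byte count header and the SendAllLines name are invented here for illustration:

    using System;
    using System.Collections.Generic;
    using System.Net.Sockets;
    using System.Text;

    // Sender: announce how many newline-terminated lines will follow, then
    // send them. The receiver reads the 4-byte count first and keeps
    // receiving until it has seen that many delimiters, instead of assuming
    // the stream is finished when Receive slows down.
    static void SendAllLines(Socket socket, IReadOnlyList<string> lines)
    {
        socket.Send(BitConverter.GetBytes(lines.Count)); // count header
        foreach (string line in lines)
        {
            socket.Send(Encoding.ASCII.GetBytes(line + "\n"));
        }
    }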

How long is each line? If they are not exactly 1024 bytes long, they will be merged by the TCP/IP stack into one continuous stream, which you then read in arbitrarily sized blocks in your receive loop.

For example, three calls sending "A", "B" and "C" will most likely arrive at your remote client as "ABC" (since either the remote stack or your own stack buffers the bytes until they are read). If you need each line to arrive without being merged with other lines, look at adding a "protocol" with markers for the start and end of each line, or, alternatively, configure the socket to avoid buffering and coalescing packets.
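
A sketch of the delimiter variant, assuming '\n'-terminated lines (CountLines is an illustrative name): buffer whatever Receive returns, count only complete lines, and carry any partial tail over to the next read.

    using System;
    using System.Net.Sockets;
    using System.Text;

    // Receive loop that tolerates merged and fragmented lines: accumulate
    // incoming bytes and split on the delimiter, keeping any incomplete
    // tail around until the rest of that line arrives.
    static int CountLines(Socket socket)
    {
        var pending = new StringBuilder();
        byte[] buffer = new byte[1024];
        int lineCount = 0;
        int read;
        while ((read = socket.Receive(buffer)) > 0)
        {
            pending.Append(Encoding.ASCII.GetString(buffer, 0, read));
            string data = pending.ToString();
            int newline;
            while ((newline = data.IndexOf('\n')) >= 0)
            {
                lineCount++;                      // one complete line seen
                data = data.Substring(newline + 1);
            }
            pending.Clear();
            pending.Append(data);                 // keep the partial tail
        }
        return lineCount;
    }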
