A very strange problem with sending data through Sockets in C#

I apologize in advance for this long message. I trimmed it down as much as possible while still reproducing the problem.

Okay, this is driving me crazy. I have a client and a server program, both in C#. The server sends data to the client through Socket.Send(). The client receives data through Socket.BeginReceive and Socket.Receive. My pseudo-protocol looks like this: the server sends a two-byte (short) value indicating the length of the actual data, immediately followed by the actual data. The client reads the first two bytes asynchronously, converts them to a short, and then immediately reads that many bytes from the socket synchronously.

Now, this works fine at a rate of one cycle every few seconds or so, but when I increase the speed, everything goes haywire. It seems that the client will occasionally read actual payload data when it tries to read the two-byte length. It then tries to convert those two arbitrary bytes to a short, which produces a completely wrong value, which leads to a crash. The following code is from my program, trimmed to show only the important lines.

Server method for sending data:

 private static object myLock = new object();

 private static bool sendData(Socket sock, String prefix, byte[] data)
 {
     lock (myLock)
     {
         try
         {
             // prefix is always a 4-byte string
             // encoder is an ASCIIEncoding object
             byte[] prefixBytes = encoder.GetBytes(prefix);
             short length = (short)(prefixBytes.Length + data.Length);
             sock.Send(BitConverter.GetBytes(length));
             sock.Send(prefixBytes);
             sock.Send(data);
             return true;
         }
         catch (Exception e) { /*blah blah blah*/ return false; }
     }
 }

Client method for receiving data:

 private static object myLock = new object();

 private void receiveData(IAsyncResult result)
 {
     lock (myLock)
     {
         byte[] buffer = new byte[1024];
         Socket sock = result.AsyncState as Socket;
         try
         {
             sock.EndReceive(result);
             // smallBuffer is a 2-byte array
             short n = BitConverter.ToInt16(smallBuffer, 0);
             // Receive n bytes
             sock.Receive(buffer, n, SocketFlags.None);
             // Determine the prefix. encoder is an ASCIIEncoding object
             String prefix = encoder.GetString(buffer, 0, 4);
             // Code to process the data goes here
             sock.BeginReceive(smallBuffer, 0, 2, SocketFlags.None, receiveData, sock);
         }
         catch (Exception e) { /*blah blah blah*/ }
     }
 }

Server-side code to reliably recreate the problem:

 byte[] b = new byte[1020]; // arbitrary length
 for (int i = 0; i < b.Length; i++)
     b[i] = 7; // arbitrary value of 7

 while (true)
 {
     // socket is a Socket connected to a client running the code above
     // "PRFX" is an arbitrary 4-character string that will be sent
     sendData(socket, "PRFX", b);
 }

Looking at the code above, you can see that the server will forever send the number 1024 (the length of the overall payload, including the prefix) as a short (0x0400), then "PRFX" in ASCII, followed by a bunch of 7s (0x07). The client will forever read the first two bytes (0x0400), interpret them as 1024, store that value as n, and then read 1024 bytes from the stream.

And that really is what it does for the first 40 or so iterations. But then, spontaneously, the client will read the first two bytes and interpret them as 1799, not 1024! 1799 in hexadecimal is 0x0707, which is two consecutive 7s! That's payload data, not the length! What happened to those two bytes? This happens with any value I put in the byte array; I just chose 7 because it makes the correlation with 1799 easy to see.
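(For the record, a quick sketch confirms the byte-level arithmetic; this assumes a little-endian machine, which is what BitConverter uses on x86:)

```csharp
using System;

class FramingDemo
{
    static void Main()
    {
        // On a little-endian machine, (short)1024 == 0x0400 is sent as { 0x00, 0x04 }.
        byte[] lengthBytes = BitConverter.GetBytes((short)1024);
        Console.WriteLine("{0:X2} {1:X2}", lengthBytes[0], lengthBytes[1]); // 00 04

        // Two stray payload bytes of 0x07, misread as a length, give 0x0707 == 1799.
        short bogusLength = BitConverter.ToInt16(new byte[] { 0x07, 0x07 }, 0);
        Console.WriteLine(bogusLength); // 1799
    }
}
```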

If you are still reading at this point, I applaud your dedication.

Some important observations:

  • Reducing the length of b increases the number of iterations before the problem occurs, but does not prevent it.
  • Adding a significant delay between each iteration of the loop does prevent the problem from occurring.
  • The problem does NOT occur when running both the client and the server on the same host and connecting through the loopback address.

As I said, this is driving me crazy! I can usually solve my own programming problems, but this one has me completely stumped. So I am pleading here for any advice or insight on the matter.

Thanks.

+6
arrays c# byte sockets
5 answers

I suspect you are assuming that the entire message has been delivered in a single call before receiveData runs. That is generally not the case. Fragmentation, delays, etc. may mean that the data arrives in dribbles, so your receiveData callback can fire when only, say, 900 bytes have arrived. Your call sock.Receive(buffer, n, SocketFlags.None); says "read data into the buffer, up to n bytes"; you may get fewer, and the number of bytes actually read is returned by Receive.

This explains why decreasing b, adding delay, or using the same host appears to "fix" the problem: under those conditions the whole message is much more likely to arrive at once (a smaller b means less data, adding delay means less data in flight at any one time, and the loopback address never touches the physical network).

To confirm that this is the problem, log the return value of Receive. If my diagnosis is right, it will sometimes be less than 1024. To fix it, you need to either wait until the network stack has buffered all of your data (not ideal), or simply receive the data as it arrives and accumulate it locally, not processing the message until all of it has arrived.
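A minimal sketch of the receive-until-complete approach; ReceiveExactly is a hypothetical helper name, not part of the Socket API:

```csharp
using System.Net.Sockets;

static class SocketExtensions
{
    // Loop on Receive until exactly 'count' bytes have arrived, because a
    // single Receive call may return fewer bytes than requested.
    public static void ReceiveExactly(this Socket sock, byte[] buffer, int count)
    {
        int received = 0;
        while (received < count)
        {
            int n = sock.Receive(buffer, received, count - received, SocketFlags.None);
            if (n == 0) // peer closed the connection mid-message
                throw new SocketException((int)SocketError.ConnectionReset);
            received += n;
        }
    }
}
```

The client would then call sock.ReceiveExactly(buffer, n) in place of the single sock.Receive(buffer, n, SocketFlags.None), and do the equivalent for the two-byte length read.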

+3

It seems to me that the main problem is that you do not check the result of EndReceive, which is the number of bytes actually read. If the socket is open, that can be anything > 0 and <= the maximum you specified. It is not safe to assume that because you requested 2 bytes, 2 bytes were read. The same is true when reading the payload.

You should always loop, accumulating the expected amount of data (decreasing the remaining count and increasing the offset accordingly).

The sync/async combination won't make this any easier, either.

+2
 sock.Receive(buffer, n, SocketFlags.None); 

You are not checking the return value of this call. The socket will read up to n bytes, but it returns only what is currently available. Most likely you are not getting the full payload from this read. So when you perform the next read of the socket, you actually get the first two bytes of the rest of the data, not the length of the next transfer.

+2

When using sockets, you should always expect that a socket may deliver fewer bytes than you asked for. You have to loop on the Receive method to get the rest of the bytes.

The same is true when you send bytes through a socket. You have to check how many bytes were actually sent, and loop on Send until all bytes have been sent.
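The send-side loop can be sketched like this; SendAll is a hypothetical helper name for illustration:

```csharp
using System.Net.Sockets;

static class SendHelpers
{
    // Loop on Send until the whole buffer is handed to the network stack,
    // because Send may accept fewer bytes than offered (e.g. full buffers).
    public static void SendAll(Socket sock, byte[] data)
    {
        int sent = 0;
        while (sent < data.Length)
        {
            sent += sock.Send(data, sent, data.Length - sent, SocketFlags.None);
        }
    }
}
```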

This behavior comes from the network layers splitting messages into multiple packets. If your messages are short, you are unlikely to run into it, but you should always code for it.

With a large buffer, a sent message will most likely be spread across several packets, and each read may receive just one packet, which is only part of your message. But small buffers can be split too.

+1

I assume that smallBuffer is declared outside the receive method and is reused from multiple threads. In other words, it is not thread-safe.

Writing good socket code is difficult. Since you are writing both the client and the server, you might look at the Windows Communication Foundation (WCF), which lets you send and receive whole objects with far less trouble.

0
