I am developing a C# application using a client-server model: the server sends a byte array containing a bitmap to the client, the client draws it on the screen and sends "OK" back to the server, the server sends the next image, and so on.
The length of the image buffer depends on the level, usually 60 to 90 KB, but from what I've seen that doesn't matter. If I run the client and server on the same computer over localhost, everything works fine: the server calls BeginSend, the client's EndReceive fires, and the entire buffer arrives in one go.
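To make this concrete, here is roughly how my receive side is wired up (a simplified sketch, not my exact code; the class, field, and buffer-size choices are placeholders):

```csharp
using System;
using System.Net.Sockets;

class ImageReceiver
{
    private readonly Socket _socket;
    private readonly byte[] _buffer = new byte[128 * 1024];

    public ImageReceiver(Socket socket)
    {
        _socket = socket;
        _socket.BeginReceive(_buffer, 0, _buffer.Length, SocketFlags.None,
                             DataReceived, null);
    }

    private void DataReceived(IAsyncResult ar)
    {
        // EndReceive returns however many bytes happened to be available,
        // which on the wireless link is the first 1460 bytes, then the rest.
        int read = _socket.EndReceive(ar);
        Console.WriteLine($"Received {read} bytes");

        // ... load the bitmap, send "OK" back to the server ...

        // Queue up the next receive.
        _socket.BeginReceive(_buffer, 0, _buffer.Length, SocketFlags.None,
                             DataReceived, null);
    }
}
```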
However, I am now testing this over a wireless network, and the following happens:
- The server sends the image.
- The data_received callback is called on the client, but it only reads 1460 bytes (the MTU; why? Shouldn't that only matter for UDP?).
- The data_received callback on the client is called again, this time with the rest of the buffer (whether that is 1,000 bytes or 100 KB).
It happens this way every time: the first packet arrives with 1460 bytes, and the second packet contains the rest.
I can work around this by concatenating the two byte arrays (roughly as sketched below), but that feels wrong, and I don't even understand why it is happening. Is this some kind of network restriction? Why doesn't C# wait until all the data has been transferred? I mean, this is TCP; I shouldn't have to worry about that, right?
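For reference, this is roughly the kind of accumulation I mean. It is only a sketch, and it assumes the server prefixes each image with a 4-byte length, which my current protocol does not actually do:

```csharp
using System;
using System.IO;
using System.Net.Sockets;

static class Framing
{
    // Read exactly 'count' bytes from the stream, looping over partial reads.
    static byte[] ReadExactly(NetworkStream stream, int count)
    {
        var data = new byte[count];
        int offset = 0;
        while (offset < count)
        {
            int read = stream.Read(data, offset, count - offset);
            if (read == 0)
                throw new IOException("Connection closed before the message was complete");
            offset += read;
        }
        return data;
    }

    public static byte[] ReceiveImage(NetworkStream stream)
    {
        // 4-byte big-endian length prefix (my assumption), then the image bytes.
        byte[] lengthBytes = ReadExactly(stream, 4);
        if (BitConverter.IsLittleEndian)
            Array.Reverse(lengthBytes);
        int length = BitConverter.ToInt32(lengthBytes, 0);

        return ReadExactly(stream, length);
    }
}
```

With something like this, it would no longer matter how many pieces TCP delivers the image in, but I would like to understand whether that is really necessary.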
Anyway, any help would be great!
Regards,
c# tcp mtu packets
Joao Oliveira