# C# async TCP sockets: buffer size and huge transfers

When using a blocking TCP socket, I don't have to specify a buffer size in advance. For example:

    using (var client = new TcpClient())
    {
        client.Connect(ServerIp, ServerPort);
        using (var reader = new BinaryReader(client.GetStream()))
        using (var writer = new BinaryWriter(client.GetStream()))
        {
            // Read the 32-bit length prefix, then exactly that many bytes.
            var byteCount = reader.ReadInt32();
            reader.ReadBytes(byteCount);
        }
    }

Notice how the remote host can send any number of bytes.

However, when using asynchronous TCP sockets, I need to create a buffer up front, and therefore hard-code its maximum size:

    var buffer = new byte[BufferSize];
    socket.BeginReceive(buffer, 0, buffer.Length, SocketFlags.None, callback, null);

I could just set the buffer size to, say, 1024 bytes. That works fine if I only need to receive small chunks of data. But what if I need to receive a 10 MB serialized object? I could set the buffer size to 10 * 1024 * 1024... but that would keep 10 MB of RAM permanently tied up while the application is running, which is silly.

So my question is: How can I efficiently receive large chunks of data using asynchronous TCP sockets?

## 1 answer

The two examples are not equivalent - your blocking code assumes that the remote end sends a 32-bit length followed by that many bytes of data. If the same protocol applies in the async case, just read that length (blocking or not), then allocate a buffer of exactly that size and initiate the asynchronous I/O.
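A minimal sketch of that idea, assuming the same length-prefixed protocol as the blocking example; the class and method names (LengthPrefixedReceiver, ReceiveMessage, OnReceive) are made up for illustration:

    using System;
    using System.Net.Sockets;

    class LengthPrefixedReceiver
    {
        private readonly Socket _socket;
        private byte[] _buffer;
        private int _received;

        public LengthPrefixedReceiver(Socket socket)
        {
            _socket = socket;
        }

        public void ReceiveMessage()
        {
            // Read the 4-byte length prefix first (done synchronously here for
            // brevity; it could just as well be another BeginReceive).
            var lengthBytes = new byte[4];
            int read = 0;
            while (read < 4)
                read += _socket.Receive(lengthBytes, read, 4 - read, SocketFlags.None);

            int byteCount = BitConverter.ToInt32(lengthBytes, 0);

            // Allocate exactly as much as this message needs and receive the
            // body asynchronously.
            _buffer = new byte[byteCount];
            _received = 0;
            _socket.BeginReceive(_buffer, 0, _buffer.Length, SocketFlags.None, OnReceive, null);
        }

        private void OnReceive(IAsyncResult ar)
        {
            int bytes = _socket.EndReceive(ar);
            if (bytes == 0) return; // remote end closed the connection

            _received += bytes;
            if (_received < _buffer.Length)
            {
                // Partial read - keep receiving into the remainder of the buffer.
                _socket.BeginReceive(_buffer, _received, _buffer.Length - _received,
                                     SocketFlags.None, OnReceive, null);
            }
            else
            {
                // _buffer now holds the complete message; hand it off for processing.
            }
        }
    }

The key point is that the buffer is sized from the length prefix for this one message, and the callback keeps posting BeginReceive calls until the whole message has arrived, since TCP may deliver it in several pieces.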

Edit:

Let me also add that allocating buffers whose size is dictated by user input, and especially by network input, is a recipe for disaster. The obvious problem is a denial-of-service attack, where a client requests a huge buffer and holds onto it - say, by sending its data very slowly - preventing other allocations and/or bogging down the entire system.

The common wisdom here is to accept a fixed amount of data at a time and parse as you go. This, of course, affects your application-level protocol design.
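As a rough illustration of that approach (again just a sketch, with a hypothetical TryExtractMessage standing in for whatever the application-level protocol parser would be), the receiver below always reads into the same small fixed buffer and parses messages incrementally:

    using System;
    using System.IO;
    using System.Net.Sockets;

    class ChunkedReceiver
    {
        private const int ChunkSize = 4096;            // fixed, small receive buffer
        private readonly byte[] _chunk = new byte[ChunkSize];
        private readonly MemoryStream _assembly = new MemoryStream();
        private readonly Socket _socket;

        public ChunkedReceiver(Socket socket)
        {
            _socket = socket;
        }

        public void Start()
        {
            _socket.BeginReceive(_chunk, 0, _chunk.Length, SocketFlags.None, OnReceive, null);
        }

        private void OnReceive(IAsyncResult ar)
        {
            int bytes = _socket.EndReceive(ar);
            if (bytes == 0) return; // connection closed

            // Append the chunk and let the protocol parser consume whatever
            // complete messages are available so far.
            _assembly.Write(_chunk, 0, bytes);
            byte[] message;
            while (TryExtractMessage(_assembly, out message))
            {
                // handle 'message' (deserialize, dispatch, etc.)
            }

            // Reuse the same small buffer for the next receive.
            _socket.BeginReceive(_chunk, 0, _chunk.Length, SocketFlags.None, OnReceive, null);
        }

        // Hypothetical incremental parser: returns true when a complete
        // application-level message has been accumulated. Its implementation
        // depends entirely on the wire protocol (length prefix, delimiter, ...).
        private static bool TryExtractMessage(MemoryStream assembled, out byte[] message)
        {
            message = null;
            return false; // placeholder - protocol-specific
        }
    }

In practice the parser would also cap how much unparsed data it is willing to accumulate per connection, which is what addresses the denial-of-service concern described above.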
