I have an application that sends serializable objects of varying sizes over a socket connection, and I would like it to be as scalable as possible. There may be tens to hundreds of concurrent connections.
- The NetworkStream comes from a TcpClient, which constantly listens for incoming messages.
- I do not want to block a thread with a standard NetworkStream.Read() call; this needs to scale. I assume Read() is blocking, since that's pretty standard behavior for this kind of class, and the class also exposes a ReadTimeout property.
- I'm not sure whether BinaryFormatter just calls Read(), or whether it does some async work for me under the hood. My guess is that it doesn't.
- The TcpClient needs to receive a message, read it to the end, and then go back to listening for messages (see the rough sketch after this list).
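For context, the listening side currently looks roughly like the sketch below. This is a minimal sketch, not my real code; the port number and HandleClientAsync are placeholders.

    using System.Net;
    using System.Net.Sockets;
    using System.Threading.Tasks;

    class Server
    {
        static async Task ListenAsync()
        {
            // Hypothetical port; the real application gets this from configuration.
            var listener = new TcpListener(IPAddress.Any, 9000);
            listener.Start();

            while (true)
            {
                // Accept a client without tying up the accept loop's thread.
                TcpClient client = await listener.AcceptTcpClientAsync();

                // Hand the connection off to a per-client receive loop; this is
                // where the Read / ReadAsync / Deserialize question comes in.
                _ = Task.Run(() => HandleClientAsync(client));
            }
        }

        static async Task HandleClientAsync(TcpClient client)
        {
            // Placeholder: receive a message, read it to the end, then wait for the next one.
            await Task.CompletedTask;
        }
    }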
So it seems there is more than one way to skin this cat, and I'm not sure which will actually be the most efficient. Should I:
Just use BinaryFormatter to read from the NetworkStream directly?
    var netStream = client.GetStream();
    var formatter = new BinaryFormatter();
    var obj = formatter.Deserialize(netStream);
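For what it's worth, with this option I picture the per-client loop looking something like the sketch below (BlockingReceiver and HandleClient are hypothetical names). As far as I know, Deserialize would just issue ordinary blocking Read() calls against the NetworkStream, so this holds a thread per connection:

    using System.Net.Sockets;
    using System.Runtime.Serialization.Formatters.Binary;

    static class BlockingReceiver
    {
        public static void HandleClient(TcpClient client)
        {
            var netStream = client.GetStream();
            var formatter = new BinaryFormatter();

            while (client.Connected)
            {
                // Blocks until a whole object has been read from the stream,
                // then loops back to wait for the next message on this connection.
                object obj = formatter.Deserialize(netStream);
                // ... dispatch obj to the rest of the application ...
            }
        }
    }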
OR do some magic with the new async/await features:
    using (var ms = new MemoryStream())
    {
        var netStream = client.GetStream();
        var buffer = new byte[1028];
        int bytesRead;
        while ((bytesRead = await netStream.ReadAsync(buffer, 0, buffer.Length)) > 0)
        {
            ms.Write(buffer, 0, bytesRead);   // write only the bytes actually read
        }
        ms.Position = 0;                      // rewind before deserializing
        var formatter = new BinaryFormatter();
        var obj = formatter.Deserialize(ms);
    }
OR the same as the previous example, but using the new CopyToAsync method:
    using (var ms = new MemoryStream())
    {
        var netStream = client.GetStream();
        await netStream.CopyToAsync(ms);
        ms.Position = 0;                      // rewind before deserializing
        var formatter = new BinaryFormatter();
        var obj = formatter.Deserialize(ms);
    }
OR something else?
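For example, one "something else" I've considered is length-prefixing each message, so the reader knows exactly how many bytes to pull off the wire before handing a complete buffer to the formatter. A rough sketch of what I mean (FramedReceiver and ReadExactAsync are hypothetical names, not an existing API):

    using System;
    using System.IO;
    using System.Net.Sockets;
    using System.Runtime.Serialization.Formatters.Binary;
    using System.Threading.Tasks;

    static class FramedReceiver
    {
        public static async Task<object> ReadMessageAsync(NetworkStream netStream)
        {
            // Read a 4-byte length prefix, then exactly that many payload bytes.
            byte[] lengthBytes = await ReadExactAsync(netStream, 4);
            int length = BitConverter.ToInt32(lengthBytes, 0);

            byte[] payload = await ReadExactAsync(netStream, length);

            var formatter = new BinaryFormatter();
            using (var ms = new MemoryStream(payload))
            {
                return formatter.Deserialize(ms);
            }
        }

        // Keeps calling ReadAsync until the requested number of bytes has arrived.
        private static async Task<byte[]> ReadExactAsync(NetworkStream netStream, int count)
        {
            var buffer = new byte[count];
            int offset = 0;
            while (offset < count)
            {
                int bytesRead = await netStream.ReadAsync(buffer, offset, count - offset);
                if (bytesRead == 0)
                    throw new EndOfStreamException("Connection closed before the full message arrived.");
                offset += bytesRead;
            }
            return buffer;
        }
    }

The sender would of course have to write the same 4-byte length prefix before each serialized message.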
I am looking for an answer that provides maximum scalability / efficiency.
[Note: the above is all pseudo-code, given as examples]