As already noted, this isn't readily available, because the code deliberately tries to make a single pass over the data (especially for IEnumerable<T>, etc.). However, depending on your data, it may already be doing a moderate amount of copying, to allow for the fact that sub-messages also need a length prefix, so some juggling may be required. This juggling can be greatly reduced by using the grouped sub-format inside the message, since groups allow forwards-only construction with no back-fills.
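For illustration (not part of the original answer), here is a minimal sketch of opting a member into the grouped sub-format in protobuf-net; the Order/Customer types are made up for the example:

```csharp
using ProtoBuf;

// Hypothetical types: DataFormat.Group writes the sub-message with
// start/end group tags instead of a length prefix, so it can be written
// forwards-only with no length to back-fill.
[ProtoContract]
public class Order
{
    [ProtoMember(1)]
    public int Id { get; set; }

    [ProtoMember(2, DataFormat = DataFormat.Group)]
    public Customer Customer { get; set; }
}

[ProtoContract]
public class Customer
{
    [ProtoMember(1)]
    public string Name { get; set; }
}
```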
So, is it possible to add a method that estimates the size of a message without serializing it to a stream?
An estimate is close to worthless; since there is no terminator, the value needs to be exact. Ultimately, the sizes are a little hard to predict without actually doing the work. There was some size-prediction code in v1, but the single-pass code is generally preferred, and in most cases the buffer overhead is nominal (there is code to reuse the internal buffers, so it doesn't spend all its time allocating buffers for small messages).
If your message is internally forwards-only (grouped), then one cheat would be to serialize it to a fake stream that measures the bytes but discards all the data; however, you would then end up serializing twice.
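A minimal sketch of such a measuring stream (the CountingStream class is made up for this illustration; it assumes protobuf-net's Serializer.Serialize(Stream, T)):

```csharp
using System;
using System.IO;
using ProtoBuf;

// Hypothetical "measuring" stream: counts the bytes written to it and
// discards the data, giving the exact serialized size at the cost of
// serializing the object twice overall.
public sealed class CountingStream : Stream
{
    public long BytesWritten { get; private set; }

    public override bool CanRead => false;
    public override bool CanSeek => false;
    public override bool CanWrite => true;
    public override long Length => BytesWritten;
    public override long Position
    {
        get => BytesWritten;
        set => throw new NotSupportedException();
    }

    public override void Write(byte[] buffer, int offset, int count)
        => BytesWritten += count;   // just count; drop the data

    public override void Flush() { }
    public override int Read(byte[] buffer, int offset, int count)
        => throw new NotSupportedException();
    public override long Seek(long offset, SeekOrigin origin)
        => throw new NotSupportedException();
    public override void SetLength(long value)
        => throw new NotSupportedException();
}

// Usage (obj is any serializable instance):
// var counter = new CountingStream();
// Serializer.Serialize(counter, obj);
// long exactSize = counter.BytesWritten;
```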
Re: "and requires the message length prefix to be Varint32, we cannot use the SerializeWithLengthPrefix method"

I'm not quite sure I see the relationship there; it supports a range of prefix formats (including a varint-style Base128 prefix), etc.; perhaps you can be more specific?
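As a sketch (assuming the protobuf-net SerializeWithLengthPrefix / DeserializeWithLengthPrefix overloads that take a PrefixStyle; the Customer type and field number 1 are arbitrary choices for the example), a varint-style prefix would look like:

```csharp
using System.IO;
using ProtoBuf;

[ProtoContract]
public class Customer
{
    [ProtoMember(1)]
    public string Name { get; set; }
}

public static class LengthPrefixDemo
{
    public static void RoundTrip()
    {
        using (var ms = new MemoryStream())
        {
            // PrefixStyle.Base128 writes the length itself as a varint.
            var customer = new Customer { Name = "Fred" };
            Serializer.SerializeWithLengthPrefix(ms, customer, PrefixStyle.Base128, 1);

            ms.Position = 0;
            var clone = Serializer.DeserializeWithLengthPrefix<Customer>(
                ms, PrefixStyle.Base128, 1);
        }
    }
}
```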
Re copying the data around: an idea I have played with here is to use a sub-optimal (padded) form of the length prefix. For example, it may be that in most cases 5 bytes is plenty, so rather than juggling the data it could reserve 5 bytes and then simply overwrite them without condensing (since the octet 10000000 still means "zero, and continue", even if it is redundant). This still requires buffering (to allow back-fill), but it would not require any data to be moved.
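As a hypothetical worked example of that padded prefix (this is not an API that protobuf-net exposes; the PaddedVarint helper is purely illustrative):

```csharp
public static class PaddedVarint
{
    // Hypothetical helper: encode a 32-bit length as exactly 5 varint bytes.
    // The first 4 bytes keep the continuation bit (0x80) set even when their
    // payload bits are zero, so the value decodes identically to the compact
    // form, but the slot is always 5 bytes wide and can be overwritten later.
    public static void Write(byte[] buffer, int offset, uint value)
    {
        for (int i = 0; i < 4; i++)
        {
            buffer[offset + i] = (byte)((value & 0x7F) | 0x80); // payload + "more"
            value >>= 7;
        }
        buffer[offset + 4] = (byte)(value & 0x7F); // last byte: no continuation
    }
}

// e.g. PaddedVarint.Write(buf, 0, 300) produces AC 82 80 80 00,
// which decodes to 300 just like the compact 2-byte form AC 02.
```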
The last, simple idea would be: serialize to a FileStream first, then write the file length followed by the file data. Obviously that trades memory for I/O.
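A rough sketch of that approach, with a made-up FileSpooler helper and a fixed-width prefix chosen purely for simplicity:

```csharp
using System;
using System.IO;
using ProtoBuf;

public static class FileSpooler
{
    // Hypothetical sketch: spool the serialized message to a temporary file,
    // then write a length prefix followed by the file contents. The prefix
    // here is a fixed 8-byte little-endian length; a varint prefix could be
    // written instead.
    public static void WriteWithLengthPrefixViaFile<T>(Stream output, T message)
    {
        string tempPath = Path.GetTempFileName();
        try
        {
            using (var file = File.Create(tempPath))
            {
                Serializer.Serialize(file, message);
            }
            using (var file = File.OpenRead(tempPath))
            {
                byte[] lengthBytes = BitConverter.GetBytes(file.Length);
                output.Write(lengthBytes, 0, lengthBytes.Length);
                file.CopyTo(output); // stream the payload after the prefix
            }
        }
        finally
        {
            File.Delete(tempPath);
        }
    }
}
```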
Marc Gravell