What is the fastest serialization engine for C#?

This is for small payloads.

I am looking to serialize 1,000,000,000 payloads in 100 ms.

Standard BinaryFormatter is very slow. DataContractSerializer is slower than BinaryFormatter.

Protocol buffers ( http://code.google.com/p/protobuf-net/ ) seem slower than BinaryFormatter for small objects!

Are there any other serialization mechanisms I should look at, whether hand-rolled code or open source projects?

EDIT: I serialize in memory and then push the payload over TCP through an asynchronous socket. The payloads generated in memory are small double arrays (10 to 500 points) with a ulong identifier.

+7
9 answers

Your performance requirement restricts the available serializers to zero. A custom BinaryWriter and BinaryReader would be the fastest you could get.
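For the payloads described in the question (a ulong identifier plus a small double array), a hand-rolled writer/reader would look roughly like the sketch below; the Payload type and member names are placeholders, not anything from an existing library:

    using System.IO;

    // Placeholder type mirroring the question's payload: one ulong id plus a small double array.
    struct Payload
    {
        public ulong Id;
        public double[] Points;
    }

    static class PayloadCodec
    {
        // Write the fields directly with BinaryWriter: no reflection, no type metadata on the wire.
        public static void Write(BinaryWriter writer, Payload p)
        {
            writer.Write(p.Id);
            writer.Write(p.Points.Length);
            for (int i = 0; i < p.Points.Length; i++)
                writer.Write(p.Points[i]);
        }

        // Read the fields back in the same order.
        public static Payload Read(BinaryReader reader)
        {
            var p = new Payload { Id = reader.ReadUInt64() };
            int count = reader.ReadInt32();
            p.Points = new double[count];
            for (int i = 0; i < count; i++)
                p.Points[i] = reader.ReadDouble();
            return p;
        }
    }

Wrapping a MemoryStream (or the socket's NetworkStream) in the writer keeps this allocation-light and avoids any reflection cost.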

+11

I would expect protobuf-net to be faster even for small objects... but you could try my protocol buffer port as well. I haven't used Marc's port for a while - mine was faster when I last benchmarked them, but I know his has been completely rewritten since then :)

I doubt that you will manage to serialize a billion items in 100 ms no matter what you do, though... I think that is simply an unreasonable expectation, especially if this is writing to disk. (Obviously if you are just overwriting the same bit of memory repeatedly you will get much better performance than serializing to disk, but I doubt that is really what you are trying to do.)

If you can give us more context, we can help more. For example, can the load be spread across several machines? (Multiple cores serializing to the same I/O device are unlikely to help, since I wouldn't expect this to be a CPU-bound operation if it is writing to disk or to the network.)

EDIT: Suppose each object has 10 doubles (8 bytes each) and a ulong identifier (8 bytes). That is 88 bytes per object at minimum, so you are trying to serialize 88 GB in 100 ms. I really don't think that is achievable, whatever you use.

I am running the protocol buffer benchmarks now (they report bytes per second), but I very much doubt they will give you what you want.

+6

You claim it is slower than BinaryFormatter for small objects, but every time I have measured it I have found the exact opposite, for example:

Serialization Performance Tests Used by WCF Bindings

I maintain, especially with the v2 code, that this may well be your fastest option. If you can post your specific test case, I will happily look at what is "up"... If you can't post it here but want to send it to me directly (see profile), that is fine too. I don't know whether your stated timings are possible under any scheme, but I'm very sure I can get you a lot faster than whatever you are currently seeing.

With the v2 code, CompileInPlace gives the fastest result - it allows some IL tricks that cannot be used when compiling to a physical DLL.
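As a rough illustration of that setup (not taken from the poster's project - the Payload type below is a stand-in, and the exact API surface can vary between protobuf-net v2 builds), the model might be prepared once and compiled in place like this:

    using System.IO;
    using ProtoBuf;
    using ProtoBuf.Meta;

    [ProtoContract]
    public class Payload   // stand-in for the actual message type
    {
        [ProtoMember(1)] public ulong Id { get; set; }
        [ProtoMember(2)] public double[] Points { get; set; }
    }

    static class ProtoSetup
    {
        // Built once at startup; CompileInPlace swaps the reflection-based
        // strategy for IL generated inside the running process.
        public static readonly RuntimeTypeModel Model = BuildModel();

        static RuntimeTypeModel BuildModel()
        {
            var model = RuntimeTypeModel.Create();
            model.Add(typeof(Payload), true);   // apply the default [ProtoMember] behaviour
            model.CompileInPlace();
            return model;
        }

        public static byte[] ToBytes(Payload p)
        {
            using (var ms = new MemoryStream())
            {
                Model.Serialize(ms, p);
                return ms.ToArray();
            }
        }
    }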

+4

The only reason to serialize objects is to make them compatible with a generic transport medium - network, disk, and so on. The speed of the serializer never really matters, because the transport medium is always far slower than the raw speed of a CPU core: easily by two orders of magnitude or more.

That is also the reason attributes are an acceptable trade-off: they are I/O-bound too, since their initialization data has to be read from the assembly metadata, which requires a disk read the first time.

So, if you are setting performance requirements, you need to focus 99% on the capability of the transport medium. A billion payloads in 100 milliseconds requires very beefy hardware. Assuming a payload of 16 bytes, you would need to move 160 gigabytes per second. That is well beyond the memory bus bandwidth inside the machine: DDR RAM moves at about 5 gigabytes per second. A one-gigabit Ethernet NIC moves at 125 megabytes per second. A commodity hard drive moves at about 65 megabytes per second, assuming no seeking.

Your goal is not realistic with current hardware capabilities.

+2

You could write custom serialization by implementing ISerializable on your data structures. Even so, you are likely to run into some "resistance" from the hardware itself in meeting requirements like these.
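A minimal sketch of that idea, assuming a hypothetical Payload type shaped like the question's data; note that ISerializable only controls what BinaryFormatter writes, so the formatter's own overhead is still there:

    using System;
    using System.Runtime.Serialization;

    [Serializable]
    public class Payload : ISerializable   // hypothetical type for illustration
    {
        public ulong Id;
        public double[] Points;

        public Payload() { }

        // Deserialization constructor required by the ISerializable pattern.
        protected Payload(SerializationInfo info, StreamingContext context)
        {
            Id = info.GetUInt64("id");
            Points = (double[])info.GetValue("pts", typeof(double[]));
        }

        // Write only the two fields that matter, under short names.
        public void GetObjectData(SerializationInfo info, StreamingContext context)
        {
            info.AddValue("id", Id);
            info.AddValue("pts", Points);
        }
    }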

0

protobuf-net is really fast, but it has limitations - see http://code.google.com/p/protobuf-net/wiki/Performance

0

In my experience, Marc's protocol buffer implementation is very good. I have not used Jon's. However, you should try to use techniques to minimize the data rather than serializing the whole lot.

I would look at the following.

  • If the messages are small, you should look at how much entropy they contain. You may have fields that can be partially or completely de-duplicated. If the communication is between two parties only, you may benefit from building up a dictionary at both ends.

  • You are using TCP, which has enough overhead even without a payload on top. You should minimize this by batching your messages into larger bundles and/or looking at UDP. Batching, combined with point 1, may get you closer to your requirement when you average out your total communication.

  • Is the full data width of the double required, or is it only there for convenience? If the extra bits are not used, this is a chance for optimization when converting to a binary stream.

As a rule, generic serialization is great when you have multiple messages that you must handle through a single interface, or when you don't know the full implementation details. In this case it would probably be better to write your own serialization methods to convert each message structure directly into byte arrays. Since you know the full implementation on both sides, the direct conversion won't be a problem, and it ensures you can inline the code and prevent boxing/unboxing as much as possible.
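As a rough sketch of that last point, assuming the message really is just the question's ulong id plus a double array (the names below are made up), the conversion can be done with BitConverter and Buffer.BlockCopy instead of a general-purpose serializer:

    using System;

    static class DirectCodec
    {
        // Pack one message (ulong id + double[] points) straight into a byte[].
        // If the full 64-bit precision is not needed, the doubles could be
        // narrowed to float first to halve the payload size.
        public static byte[] Pack(ulong id, double[] points)
        {
            var buffer = new byte[8 + 4 + points.Length * 8];
            Buffer.BlockCopy(BitConverter.GetBytes(id), 0, buffer, 0, 8);
            Buffer.BlockCopy(BitConverter.GetBytes(points.Length), 0, buffer, 8, 4);
            Buffer.BlockCopy(points, 0, buffer, 12, points.Length * 8);
            return buffer;
        }

        public static void Unpack(byte[] buffer, out ulong id, out double[] points)
        {
            id = BitConverter.ToUInt64(buffer, 0);
            int count = BitConverter.ToInt32(buffer, 8);
            points = new double[count];
            Buffer.BlockCopy(buffer, 12, points, 0, count * 8);
        }
    }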

0

This is the fastest approach I know of. It has its drawbacks: like a rocket, you would not want it on your car, but it has its place. You need to set up your structs and have the same struct on both ends of your pipe, and the struct must be a fixed size, or it becomes more complicated than this example.

Here is the performance I get on my machine (i7 920, 12 GB RAM), Release build, no debugger attached. It uses 100% CPU during the test, so this test is CPU-bound.

    Finished in 3421ms, Processed 52.15 GB
    For data write rate of 15.25 GB/s
    Round trip passed

... and the code:

    using System;
    using System.Diagnostics;
    using System.Runtime.InteropServices;
    using System.Threading.Tasks;

    class Program
    {
        unsafe static void Main(string[] args)
        {
            int arraySize = 100;
            int iterations = 10000000;
            ms[] msa = new ms[arraySize];
            for (int i = 0; i < arraySize; i++)
            {
                msa[i].d1 = i + .1d;
                msa[i].d2 = i + .2d;
                msa[i].d3 = i + .3d;
                msa[i].d4 = i + .4d;
                msa[i].d5 = i + .5d;
                msa[i].d6 = i + .6d;
                msa[i].d7 = i + .7d;
            }

            int sizeOfms = Marshal.SizeOf(typeof(ms));
            byte[] bytes = new byte[arraySize * sizeOfms];

            TestPerf(arraySize, iterations, msa, sizeOfms, bytes);

            // Let's round-trip it.
            var msa2 = new ms[arraySize];                             // array of structs we want to push the bytes into
            var handle2 = GCHandle.Alloc(msa2, GCHandleType.Pinned);  // get a handle to that array
            Marshal.Copy(bytes, 0, handle2.AddrOfPinnedObject(), bytes.Length); // do the copy
            handle2.Free();                                           // clean up the handle

            // Assert that we didn't lose any data.
            var passed = true;
            for (int i = 0; i < arraySize; i++)
            {
                if (msa[i].d1 != msa2[i].d1 || msa[i].d2 != msa2[i].d2 || msa[i].d3 != msa2[i].d3 ||
                    msa[i].d4 != msa2[i].d4 || msa[i].d5 != msa2[i].d5 || msa[i].d6 != msa2[i].d6 ||
                    msa[i].d7 != msa2[i].d7)
                {
                    passed = false;
                    break;
                }
            }
            Console.WriteLine("Round trip {0}", passed ? "passed" : "failed");
        }

        unsafe private static void TestPerf(int arraySize, int iterations, ms[] msa, int sizeOfms, byte[] bytes)
        {
            // Start benchmark.
            var sw = Stopwatch.StartNew();
            // This cheats a little bit and reuses the same buffer
            // for each thread, which would not work IRL.
            var plr = Parallel.For(0, iterations / 1000, i => // just to be nice to the task pool, chunk tasks into 1000s
            {
                for (int j = 0; j < 1000; j++)
                {
                    // Get a handle to the struct[] we want to copy from.
                    var handle = GCHandle.Alloc(msa, GCHandleType.Pinned);
                    Marshal.Copy(handle.AddrOfPinnedObject(), bytes, 0, bytes.Length); // copy from it
                    handle.Free(); // clean up the handle
                    // Here you would want to write to some buffer or something :)
                }
            });
            // Stop benchmark.
            sw.Stop();
            var size = arraySize * sizeOfms * (double)iterations / 1024 / 1024 / 1024d; // convert from bytes to GB
            Console.WriteLine("Finished in {0}ms, Processed {1:N} GB", sw.ElapsedMilliseconds, size);
            Console.WriteLine("For data write rate of {0:N} GB/s", size / (sw.ElapsedMilliseconds / 1000d));
        }
    }

    [StructLayout(LayoutKind.Explicit, Size = 56, Pack = 1)]
    struct ms
    {
        [FieldOffset(0)]  public double d1;
        [FieldOffset(8)]  public double d2;
        [FieldOffset(16)] public double d3;
        [FieldOffset(24)] public double d4;
        [FieldOffset(32)] public double d5;
        [FieldOffset(40)] public double d6;
        [FieldOffset(48)] public double d7;
    }
0

If you do not want to take the time to implement a comprehensive explicit serialization/deserialization mechanism, try this: http://james.newtonking.com/json/help/html/JsonNetVsDotNetSerializers.htm

In my usage with large objects (1 GB+ when serialized to disk), I found that the file created by the Newtonsoft library was 4.5 times smaller and took 6 times fewer seconds to process than when using BinaryFormatter.
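For what it's worth, a minimal sketch of the Json.NET usage being suggested here (the Payload type is a placeholder for whatever you actually serialize):

    using Newtonsoft.Json;

    public class Payload   // placeholder type
    {
        public ulong Id { get; set; }
        public double[] Points { get; set; }
    }

    static class JsonDemo
    {
        public static void RoundTrip()
        {
            // Serialize to a JSON string and back again.
            string json = JsonConvert.SerializeObject(
                new Payload { Id = 1, Points = new[] { 0.1, 0.2 } });
            Payload copy = JsonConvert.DeserializeObject<Payload>(json);
        }
    }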

0
