Protocol Buffers vs Thrift: which is the faster route to a working server/client?

We want to build a client/server system in C++, and it is unclear whether Thrift or Google Protocol Buffers would get us to a working prototype faster. We want to use TCP sockets as the communication layer, over a local subnet (not the wide Internet). It needs to run on Linux / OS X / Windows.

We basically need simple asynchronous messages flowing in each direction, though in the future we may need RPC-style request/response. Most of our messages are small, but some will carry a large payload of roughly 100K-500K bytes (just a large opaque buffer accompanying the message; `bytes`, if we had to declare it in the IDL).
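For concreteness, a message like that might look as follows in Protocol Buffers' IDL. This is only a hypothetical sketch; the message and field names are invented, not part of any existing schema:

```
// Hypothetical .proto sketch: a small message that can optionally
// carry a large opaque payload (~100K-500K bytes) as raw bytes.
syntax = "proto2";

message Envelope {
  required int32 type = 1;     // application-level message type
  optional bytes payload = 2;  // large opaque buffer, when present
}
```

Thrift's IDL has an equivalent `binary` type for the same purpose.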

I know we want the message description / marshalling / unmarshalling that both of them offer, but I don't have a clear picture of which other pieces each one supplies toward quickly getting a working server/client.

Am I right that Thrift supplies a full stack, including the TCP/IP socket transport for sending and receiving messages, while Protocol Buffers handle only the marshalling layer and know nothing about sockets?

If that's true, then with Protocol Buffers you would have to write a mini-protocol on top, wrapping headers and/or footers around the messages so the receiver can tell where one message ends and the next begins on the socket (and handle the harder problem of resynchronizing if something arrives longer/shorter than expected). Are there open-source packages that supply these layers (preferably using boost::asio)?
