IPC speed and comparison

I am trying to implement a real-time application whose modules communicate over IPC. The modules do some heavy data processing. I used message queuing (ActiveMQ) as the IPC basis in the prototype, which was easy (given that I am completely new to IPC), but it is very slow.

Here is my situation:

  • I isolated the IPC part so that I can swap in a different mechanism later.
  • I have 3 weeks to implement a faster version. ;-(
  • IPC should be fast, but also relatively easy to pick up.

I have been looking at different IPC approaches: sockets, pipes, shared memory. However, I have no experience with IPC, and I absolutely cannot afford to fail this demo in 3 weeks ... Which IPC would be a safe way to start?

Thanks, Lily

+6
shared-memory pipe sockets real-time ipc
4 answers

I myself faced a similar question.

I found the following pages useful - IPC Performance: Named Pipe vs Socket (in particular) and Sockets vs Named Pipes for Local IPC on Windows? .

It seems like the consensus is that shared memory is the way to go if you are really concerned about performance, but if your current system uses a message queue, shared memory would be quite ... a different structure. A socket and/or named pipe may be easier to implement, and if either meets your specifications, you are done there.
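If you do try the socket route first, a Unix domain socket pair is often the least code. A minimal sketch in Python (POSIX only; the fork-based layout and names are my own illustration, not from the linked posts):

```python
import os
import socket

# One connected pair of Unix domain sockets shared between a parent
# and a forked child process.
parent_sock, child_sock = socket.socketpair()

pid = os.fork()
if pid == 0:  # child: receive one message, echo it back uppercased
    parent_sock.close()
    data = child_sock.recv(1024)
    child_sock.sendall(data.upper())
    child_sock.close()
    os._exit(0)

# parent: send a request and read the reply
child_sock.close()
parent_sock.sendall(b"ping")
reply = parent_sock.recv(1024)
print(reply.decode())  # PING
parent_sock.close()
os.waitpid(pid, 0)
```

The same shape works over `AF_UNIX` sockets with a filesystem path when the two processes are not parent and child.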

+4

On Windows you can use WM_COPYDATA, a special kind of shared-memory IPC. It is an old but simple method: process A sends a message containing a pointer to some data in its memory and waits until process B has processed (sorry) the message, e.g. created a local copy of the data. This method is pretty fast and it still works in the Windows 8 Developer Preview (see my benchmark). Any kind of data can be transported this way by serializing it on the sender side and deserializing it on the receiver side. It is also simple to implement sender and receiver message queues on top of it to make the communication asynchronous.
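WM_COPYDATA itself is Win32-specific, but the serialize-copy-deserialize pattern this answer describes can be sketched portably. A hedged Python illustration using a pipe and pickle (all names here are illustrative, not part of the Win32 API):

```python
import os
import pickle

# Sender side: serialize the data into a flat byte buffer.
r, w = os.pipe()
payload = pickle.dumps({"cmd": "process", "data": [1, 2, 3]})

pid = os.fork()
if pid == 0:  # receiver: make a local copy of the bytes, deserialize
    os.close(w)
    local_copy = os.read(r, 65536)
    msg = pickle.loads(local_copy)
    assert msg["data"] == [1, 2, 3]
    os._exit(0)

# "Send the message": the copy crosses the process boundary here.
os.close(r)
os.write(w, payload)
os.close(w)
_, status = os.waitpid(pid, 0)
print("receiver ok" if status == 0 else "receiver failed")
```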

+3

You can check out this blog post at https://publicwork.wordpress.com/2016/07/17/endurox-vs-zeromq/

In essence, it compares Enduro/X, which is built on POSIX queues (kernel IPC queues), with ZeroMQ, which can deliver messages over several different transport classes, including tcp:// (network sockets), ipc:// , inproc:// , and pgm:// / epgm:// for multicast.

From the charts you can see that at some point, with larger data packets, Enduro/X running on queues starts to win over sockets.

Both systems perform well at ~400,000 messages per second, but with 5KB messages the kernel queues do better.

(image: benchmark chart; source: https://publicwork.wordpress.com/2016/07/17/endurox-vs-zeromq/ )


UPDATE: In response to a comment below, I re-ran the test with ZeroMQ on ipc:// as well, see the image:

(image: benchmark chart with ZeroMQ on ipc:// ; source: https://publicwork.wordpress.com/2016/07/17/endurox-vs-zeromq/ )

As we can see, ZeroMQ's ipc:// does better, but again, in a certain range Enduro/X shows better results, and then ZeroMQ takes over again.

So I would say that the choice of IPC depends on the work you plan to do.

Note that ZeroMQ's ipc:// runs over POSIX pipes, while Enduro/X runs over POSIX queues.

+3

You will get the best results with a shared memory solution.
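As a minimal illustration of why shared memory is fast — the reader sees the writer's bytes in place instead of copying them through the kernel — here is a sketch using Python's `multiprocessing.shared_memory` (Python 3.8+, POSIX fork; my own example, not part of the benchmark below):

```python
import os
from multiprocessing import shared_memory

# Create a 128-byte shared segment visible to both processes.
shm = shared_memory.SharedMemory(create=True, size=128)

pid = os.fork()
if pid == 0:  # child: write into the shared buffer in place
    shm.buf[:5] = b"hello"
    os._exit(0)

os.waitpid(pid, 0)  # waiting doubles as the synchronization barrier here
text = bytes(shm.buf[:5]).decode()
print(text)  # hello
shm.close()
shm.unlink()
```

In a real application you would replace the `waitpid` barrier with a proper synchronization primitive (semaphore, futex, lock-free ring buffer) so both sides can run concurrently.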

I recently came across the same kind of IPC benchmark and ran it myself. I think the results will be useful to anyone who wants to compare IPC performance.

Pipe benchmark:

 Message size:       128
 Message count:      1000000
 Total duration:     27367.454 ms
 Average duration:   27.319 us
 Minimum duration:   5.888 us
 Maximum duration:   15763.712 us
 Standard deviation: 26.664 us
 Message rate:       36539 msg/s

FIFO test (named pipes):

 Message size:       128
 Message count:      1000000
 Total duration:     38100.093 ms
 Average duration:   38.025 us
 Minimum duration:   6.656 us
 Maximum duration:   27415.040 us
 Standard deviation: 91.614 us
 Message rate:       26246 msg/s

Message Queuing Test:

 Message size:       128
 Message count:      1000000
 Total duration:     14723.159 ms
 Average duration:   14.675 us
 Minimum duration:   3.840 us
 Maximum duration:   17437.184 us
 Standard deviation: 53.615 us
 Message rate:       67920 msg/s

Shared memory test:

 Message size:       128
 Message count:      1000000
 Total duration:     261.650 ms
 Average duration:   0.238 us
 Minimum duration:   0.000 us
 Maximum duration:   10092.032 us
 Standard deviation: 22.095 us
 Message rate:       3821893 msg/s

TCP Socket Test:

 Message size:       128
 Message count:      1000000
 Total duration:     44477.257 ms
 Average duration:   44.391 us
 Minimum duration:   11.520 us
 Maximum duration:   15863.296 us
 Standard deviation: 44.905 us
 Message rate:       22483 msg/s

Unix domain socket test:

 Message size:       128
 Message count:      1000000
 Total duration:     24579.846 ms
 Average duration:   24.531 us
 Minimum duration:   2.560 us
 Maximum duration:   15932.928 us
 Standard deviation: 37.854 us
 Message rate:       40683 msg/s

ZeroMQ Test:

 Message size:       128
 Message count:      1000000
 Total duration:     64872.327 ms
 Average duration:   64.808 us
 Minimum duration:   23.552 us
 Maximum duration:   16443.392 us
 Standard deviation: 133.483 us
 Message rate:       15414 msg/s
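For intuition about how numbers like these are produced, here is a tiny hypothetical harness that times round trips of a 128-byte message over anonymous pipes. It is much cruder than the benchmark above (no warm-up, no min/max/stddev), but it shows the basic ping-pong measurement loop:

```python
import os
import time

MSG = b"x" * 128   # same message size as the results above
N = 1000           # round trips to time

r1, w1 = os.pipe()  # parent -> child
r2, w2 = os.pipe()  # child -> parent

pid = os.fork()
if pid == 0:  # child: echo every message straight back
    for _ in range(N):
        os.write(w2, os.read(r1, len(MSG)))
    os._exit(0)

start = time.perf_counter()
for _ in range(N):
    os.write(w1, MSG)       # send one message
    os.read(r2, len(MSG))   # wait for the echo
elapsed = time.perf_counter() - start
print(f"{N} round trips, avg {elapsed / N * 1e6:.1f} us")
os.waitpid(pid, 0)
```

Note that this measures a full round trip; the one-way latencies reported above are roughly half of such a figure.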
0
