Cross-Platform IPC

I am looking for suggestions on possible IPC mechanisms that are:

  • Cross-platform (at least Win32 and Linux)
  • Simple to implement in C++, as well as in the most common scripting languages (Perl, Ruby, Python, etc.)
  • Finally, easy to use from a programming point of view!

What are my options? I program under Linux, but I would like what I write to be portable to other OSes in the future. I was thinking about using sockets, named pipes, or something like D-Bus.

+59
c++ python linux cross-platform ipc
Sep 13 '08 at 16:10
16 answers

In terms of speed, pipes are the best cross-platform IPC mechanism. That assumes, however, that you want cross-platform IPC on the same machine. If you want to be able to talk to processes on remote machines, you will want to look at using sockets instead. Fortunately, at least for TCP, sockets and pipes behave in much the same way: although the APIs for setting them up and connecting to them are different, they both act as streams of data.

However, the hard part is not the communication channel but the messages you send over it. You really want to look at something that will perform validation and parsing for you. I recommend looking at Protocol Buffers. You basically create a spec file that describes the object you want to pass between processes, and a compiler generates code in several different languages for reading and writing objects that conform to the spec. This is much simpler (and less error-prone) than trying to come up with a messaging protocol and parser yourself.
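To see the kind of hand-rolled framing and parsing that a tool like Protocol Buffers spares you from, here is a minimal sketch of a length-prefixed JSON protocol over a local socket pair (Python for brevity; the `send_msg`/`recv_msg` names are illustrative, not from any library):

```python
import json
import socket
import struct

def send_msg(sock, obj):
    """Serialize obj and prefix it with a 4-byte big-endian length."""
    payload = json.dumps(obj).encode("utf-8")
    sock.sendall(struct.pack(">I", len(payload)) + payload)

def _recv_exact(sock, n):
    """Read exactly n bytes, since recv() may return short reads."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed the connection")
        buf += chunk
    return buf

def recv_msg(sock):
    """Read one length-prefixed message and deserialize it."""
    (length,) = struct.unpack(">I", _recv_exact(sock, 4))
    return json.loads(_recv_exact(sock, length).decode("utf-8"))

a, b = socket.socketpair()
send_msg(a, {"op": "ping", "seq": 1})
print(recv_msg(b))  # {'op': 'ping', 'seq': 1}
```

Even this toy version has to deal with short reads and byte order; a schema compiler handles that, plus validation and cross-language type mapping, for you.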

+45
Sep 15 '08 at 19:22

For C++, check out Boost IPC.
Perhaps you can create or find some bindings for the scripting languages.

Otherwise, if it is really important to be able to interoperate with scripting languages, your best bet is to use files, pipes, or sockets, or even a higher-level abstraction such as HTTP.
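As a sketch of that higher-level HTTP option, here is a tiny local echo service built only from the Python standard library (the handler class and helper function names are made up for illustration):

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class EchoHandler(BaseHTTPRequestHandler):
    """Echoes the POSTed body straight back to the client."""
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

def http_echo_roundtrip(obj):
    """Start a throwaway echo server on a free port, POST obj, return the reply."""
    server = HTTPServer(("127.0.0.1", 0), EchoHandler)  # port 0 = any free port
    threading.Thread(target=server.serve_forever, daemon=True).start()
    try:
        req = urllib.request.Request(
            "http://127.0.0.1:%d/" % server.server_port,
            data=json.dumps(obj).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())
    finally:
        server.shutdown()

print(http_echo_roundtrip({"msg": "hi"}))  # {'msg': 'hi'}
```

The appeal of HTTP here is exactly the cross-language point: any scripting language can act as either side of this exchange with its stock HTTP support.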

+14
Sep 13 '08 at 16:19

Why not D-Bus? It is a very simple message-passing system that runs on almost all platforms and is designed to be robust. Almost every scripting language supports it at this point.

http://freedesktop.org/wiki/Software/dbus

+9
Sep 16 '08 at 17:04

You might want to try YAMI. It is very simple yet functional, portable, and comes with bindings for several languages.

+7
Sep 15 '08 at 22:11

What about Facebook's Thrift?

Thrift is a software framework for scalable cross-language services development. It combines a software stack with a code generation engine to build services that work efficiently and seamlessly between C++, Java, Python, PHP, Ruby, Erlang, Perl, Haskell, C#, Cocoa, Smalltalk, and OCaml.

+5
Sep 13 '08 at 16:21

I think you need something based on sockets.

If you want RPC, not just IPC, I would suggest something like XML-RPC/SOAP, which runs over HTTP and can be used from any language.
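A small demonstration of how little code that takes: Python ships XML-RPC support in its standard library, so a cross-language RPC endpoint is only a few lines (the `add` function is just an example method):

```python
import threading
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

def xmlrpc_demo():
    """Expose one function over XML-RPC and call it through a client proxy."""
    server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
    server.register_function(lambda a, b: a + b, "add")
    threading.Thread(target=server.serve_forever, daemon=True).start()
    try:
        proxy = ServerProxy("http://127.0.0.1:%d/" % server.server_address[1])
        return proxy.add(2, 3)  # looks like a local call, travels over HTTP
    finally:
        server.shutdown()

print(xmlrpc_demo())  # 5
```

Any language with an XML-RPC client library (Perl, Ruby, C++, etc.) could call the same `add` endpoint.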

+5
Sep 13 '08 at 17:17

TCP sockets to localhost FTW.

+4
Sep 13 '08 at 17:03

If you want to try something a little different, there is also ZeroC's ICE platform. It is open source and supported on almost every OS you can think of, with language support for C++, C#, Java, Ruby, Python, and PHP. It is also very clean to work with (the language mappings are tailored to feel natural in each language), and it is fast and efficient. There is even a cut-down version for embedded devices.

+4
Sep 15 '08 at 19:39

If you want something portable, easy to use, multi-language, and LGPL-licensed, I would recommend ZeroMQ:

  • Amazingly fast, almost linearly scalable, and still simple.
  • Suitable for simple and complex systems/architectures.
  • Very powerful communication patterns available: REQ-REP, PUSH-PULL, PUB-SUB, PAIR.
  • You can choose the transport to make it more efficient, depending on whether you are passing messages between threads ( inproc:// ), processes ( ipc:// ), or machines ( {tcp|pgm|epgm}:// ), with a smart option to shave off part of the protocol overhead when connections run between VMware virtual machines ( vmci:// ).

For serialization, I would suggest MessagePack or Protocol Buffers (which have already been mentioned in other answers), depending on your needs.

+4
Jul 30 '14 at 10:22

It does not get any simpler than using pipes, which are supported on every OS I know of, and they can be accessed in almost any language.

Check out this tutorial.
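The most portable way to get a pipe without touching platform-specific APIs is to let a child process inherit its standard streams; a minimal sketch (the uppercasing child is just a stand-in for a real worker, and `pipe_upper` is an invented name):

```python
import subprocess
import sys

def pipe_upper(text):
    """Send text to a child process over stdin; read the uppercased reply from stdout."""
    child = "import sys; sys.stdout.write(sys.stdin.read().upper())"
    proc = subprocess.Popen(
        [sys.executable, "-c", child],
        stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
    )
    out, _ = proc.communicate(text)  # write, close stdin, collect stdout
    return out

print(pipe_upper("hello over a pipe"))  # HELLO OVER A PIPE
```

The same stdin/stdout convention works identically on Win32 and Linux, and the child could just as easily be a Perl or Ruby script.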

+3
Sep 13 '08 at 16:27

Distributed computing is usually complex, and you are well advised to use existing libraries or frameworks instead of reinventing the wheel. Previous posters have already listed a couple of these libraries and frameworks. Depending on your needs, you can choose something very low level (such as sockets) or a high-level framework (such as CORBA). There cannot be one general "use this" answer. You need to educate yourself about distributed programming; after that you will find it much easier to choose the right library or framework for the job.

There is a widely used C++ framework for distributed computing called ACE, and the CORBA ORB TAO (which is built on ACE). There are very good books about ACE, http://www.cs.wustl.edu/~schmidt/ACE/ , so you might want to take a look.

+3
Dec 08 '08 at 22:55

YAMI (Yet Another Messaging Infrastructure) is a lightweight messaging and networking system.

+3
Jan 25 '10 at 23:52

I can suggest that you use the plibsys C library. It is very simple, lightweight, and cross-platform, and is released under the LGPL. It provides:

  • named system-wide shared memory regions (System V, POSIX, and Windows implementations);
  • named system-wide semaphores for access synchronization (System V, POSIX, and Windows implementations);
  • a named system-wide shared buffer implementation based on shared memory and semaphores;
  • sockets (TCP, UDP, SCTP) with IPv4 and IPv6 support (UNIX and Windows implementations).

It is an easy-to-use library with fairly good documentation. Since it is written in C, you can easily create bindings for scripting languages.

If you need to pass large data sets between processes (especially if speed is important), it is better to use shared memory to pass the data itself and sockets to notify the other process that the data is ready. You can do it as follows:

  • one process puts the data into a shared memory segment and sends a notification through a socket to the other process; since the notification is usually very small, the overhead is minimal;
  • the other process receives the notification and reads the data from the shared memory segment; after that, it sends a notification back to the first process confirming that the data has been consumed, so that the first process can supply more data.

This approach can be implemented in a cross-platform manner.
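That notification scheme can be sketched in a few lines of Python using the standard library's multiprocessing.shared_memory module (a thread stands in for the second process here, and the 4-byte size notification is an illustrative wire format, not anything plibsys-specific):

```python
import socket
import threading
from multiprocessing import shared_memory

def producer(sock, shm_name, data):
    """Put data into shared memory, then send a tiny notification over the socket."""
    shm = shared_memory.SharedMemory(name=shm_name)
    shm.buf[:len(data)] = data
    sock.sendall(len(data).to_bytes(4, "big"))  # notification: just the payload size
    sock.recv(4)                                # wait for the consumer's ack
    shm.close()

def shm_roundtrip(payload):
    """Run the producer in a thread (standing in for a second process) and consume."""
    shm = shared_memory.SharedMemory(create=True, size=1024)
    a, b = socket.socketpair()
    t = threading.Thread(target=producer, args=(a, shm.name, payload))
    t.start()
    size = int.from_bytes(b.recv(4), "big")  # consumer: block until notified
    received = bytes(shm.buf[:size])         # read the bulk data from shared memory
    b.sendall(b"done")                       # ack so the producer can reuse the segment
    t.join()
    a.close()
    b.close()
    shm.close()
    shm.unlink()
    return received

print(shm_roundtrip(b"bulk payload"))  # b'bulk payload'
```

Only the small size message crosses the socket; the bulk payload never gets copied through a kernel stream, which is the whole point of the technique.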

+2
Jun 19 '16 at 18:54

Python has a pretty good IPC library: see https://docs.python.org/2/library/ipc.html

0
Sep 16 '08 at 17:07

You might want to check out openbinder.

0
Mar 21 '11 at 7:52

Google protobufs are a very bad idea if you want code that is easy to maintain and debug. It is too easy for people to abuse them and use them to pollute your code. The .proto files are fine (they are basically the same thing as a struct header file), but the code the compiler generates is complete crap, making you wonder whether it is really a hidden tool to sabotage software projects instead of automating them. Once you have used it for a while, it is almost impossible to remove it from your code. You are better off just using a header file of fixed-format structs, which are easily debugged.

If you really need compression, switch to an address/data mapping of the record structure ... then packets are just a set of address/data pairs ... also a structure that is very easy to automate with your own Perl scripts that generate code that is human-readable and debuggable.
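For what it is worth, the fixed-format-record approach this answer advocates looks roughly like the following (sketched with Python's struct module rather than a C header; the u32 address / u32 data layout is an assumed example format):

```python
import struct

# One fixed-format record: u32 address, u32 data, network byte order.
# This mirrors a C struct { uint32_t addr; uint32_t data; } in a header file.
RECORD = struct.Struct(">II")

def pack_pairs(pairs):
    """Serialize a list of (address, data) pairs into a flat byte string."""
    return b"".join(RECORD.pack(addr, data) for addr, data in pairs)

def unpack_pairs(blob):
    """Recover the (address, data) pairs from a packed byte string."""
    return [RECORD.unpack_from(blob, off) for off in range(0, len(blob), RECORD.size)]

blob = pack_pairs([(0x1000, 42), (0x1004, 7)])
print(unpack_pairs(blob))  # [(4096, 42), (4100, 7)]
```

Because the layout is fixed, a hex dump of the wire bytes maps directly onto the record definition, which is the debuggability argument being made above.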

-5
Sep 30 '13 at 17:37


