MPI on a single dual-core machine

What happens if I run an MPI program that requires 3 processes (i.e. mpiexec -np 3 ./Program) on a single machine that has only 2 CPU cores?

2 answers

It depends on your MPI implementation, of course. Most likely it will create three processes and use shared memory for message passing. This will work just fine: the operating system will schedule the three processes across the two CPUs and always run one of the processes that is ready. If a process is waiting to receive a message, it blocks, and the operating system schedules one of the other two processes to run, one of which will be the process sending that message.
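
To make this concrete, here is a minimal sketch (my own illustration, not part of either answer): three ranks pass a token around a ring using blocking MPI_Send/MPI_Recv. On a dual-core machine, whichever rank is blocked inside MPI_Recv is simply descheduled, and the OS gives its core to one of the other ranks.

    /* ring.c: compile with "mpicc ring.c -o ring", run with "mpiexec -np 3 ./ring" */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, token;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (rank == 0) {
            /* Rank 0 starts the ring, then blocks until the token comes back. */
            token = 42;
            MPI_Send(&token, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&token, 1, MPI_INT, size - 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else {
            /* Every other rank blocks in MPI_Recv until its predecessor sends;
               while it waits, the OS can run the sending rank on a free core. */
            MPI_Recv(&token, 1, MPI_INT, rank - 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(&token, 1, MPI_INT, (rank + 1) % size, 0, MPI_COMM_WORLD);
        }

        printf("rank %d of %d handled token %d\n", rank, size, token);
        MPI_Finalize();
        return 0;
    }

Even with three ranks and only two cores, the program completes normally; it just cannot keep all three ranks computing at the same instant.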

Martin gave the correct answer and I have upvoted it, but I just want to add a few subtleties that are a little too long to fit into the comment box.

There is nothing wrong with having more processes than cores; you almost certainly had dozens of processes running on your machine long before you ever started an MPI program. You can try this with any command-line executable you have sitting around, with something like mpirun -np 24 hostname or mpirun -np 17 ls in a Linux terminal, and you will get 24 copies of your hostname or 17 (probably interleaved) directory listings, and everything will work fine.

In MPI, running more processes than cores is usually called "oversubscribing". The fact that it has a special name already suggests that it is a special case. The kinds of programs written with MPI typically perform best when each process has its own core. There are situations where that need not be true, but it is by far the usual one. For that reason OpenMPI, for example, is optimized for the usual case: it simply makes the strong assumption that every process has its own core, and is therefore very aggressive about using the CPU to poll to see whether a message has arrived yet (since it figures the CPU has nothing better to do anyway). That is not a problem, and it can easily be turned off if OpenMPI knows the node is oversubscribed ( http://www.open-mpi.org/faq/?category=running#oversubscribing ). It is a design decision, and one that improves performance in the vast majority of cases.
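
If you do want to tell OpenMPI explicitly that a node is oversubscribed, the FAQ linked above describes the mechanisms. As a hedged sketch (the exact option names and defaults vary between OpenMPI versions, so check your version's documentation):

    # Declare how many slots the machine really has; OpenMPI then knows
    # that asking for 3 ranks oversubscribes it and can switch to a
    # yielding (non-busy-wait) progress mode.
    echo "localhost slots=2" > myhostfile
    mpirun --hostfile myhostfile -np 3 ./Program

    # Or request yielding behaviour directly via the MCA parameter
    # mentioned in the FAQ.
    mpirun --mca mpi_yield_when_idle 1 -np 3 ./Program

Note that newer OpenMPI releases may refuse to start more ranks than declared slots unless you also pass --oversubscribe.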

For historical reasons I am more familiar with OpenMPI than with MPICH2, but my understanding is that MPICH2's defaults are more forgiving of oversubscription, although I believe the more aggressive busy-waiting can be turned on there as well.

Anyway, this is a long way of saying that yes, what you are doing is perfectly fine, and if you ever see strange problems when switching MPI implementations or even MPI versions, do a quick search to see whether there are any parameters that need to be tweaked for this case.
