What is the difference between darray and subarray in MPI?

I have a parallel I/O project for a parallel programming class, and I have to implement derived data types. I did not clearly understand the difference between darray and subarray. Can a darray be built from dynamically allocated arrays, or not? And what is the main difference?

io mpi derived-types
1 answer

Subarray lets you describe a single block/slice of a larger multidimensional array. If every MPI task has one slice or block of a single large global array (or if you are communicating slabs of local arrays between tasks), then MPI_Type_create_subarray is the way to go; the syntax is very straightforward. For solving problems like PDEs on regular meshes, this decomposition is extremely common: each processor owns its own chunk of the global grid, with as many of its grid cells as possible stored locally. In the MPI-IO case, each MPI task would create a subarray corresponding to its piece of the global array and use that as its file view to read or write its part of the domain from a file containing the whole data set.
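A minimal sketch of that MPI-IO pattern (not part of the original answer): each rank writes its block of a global 2-D array of doubles. The function name and the way the global/local sizes and starting offsets are obtained are assumptions; they would come from whatever decomposition the application uses.

```c
/* Sketch: write one rank's block of a global gsizes[0] x gsizes[1]
 * array of doubles with MPI-IO, using a subarray type as the file view.
 * gsizes, lsizes and starts are assumed to be known from the
 * application's domain decomposition. */
#include <mpi.h>

void write_block(MPI_Comm comm, const char *fname, const double *local,
                 int gsizes[2], int lsizes[2], int starts[2])
{
    MPI_Datatype filetype;
    MPI_File fh;

    /* Describe this rank's block within the global 2-D array. */
    MPI_Type_create_subarray(2, gsizes, lsizes, starts,
                             MPI_ORDER_C, MPI_DOUBLE, &filetype);
    MPI_Type_commit(&filetype);

    MPI_File_open(comm, fname, MPI_MODE_CREATE | MPI_MODE_WRONLY,
                  MPI_INFO_NULL, &fh);
    /* The file view makes each rank see only its own block of the file. */
    MPI_File_set_view(fh, 0, MPI_DOUBLE, filetype, "native", MPI_INFO_NULL);
    MPI_File_write_all(fh, local, lsizes[0] * lsizes[1], MPI_DOUBLE,
                       MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    MPI_Type_free(&filetype);
}
```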

MPI_Type_create_darray lets you describe more complicated distributed-array layouts than single blocks. For distributed linear algebra computations, it may make sense to distribute some matrices across rows: say, if there are 5 MPI tasks, task 0 gets rows 0, 5, 10, ..., and task 1 gets rows 1, 6, 11, etc. Other matrices may be distributed across columns, or you can distribute them in blocks of rows, blocks of columns, or both. These data distributions are the same as those in the ill-fated HPF, which let you define parallel array layouts in exactly this way, on an array-by-array basis.
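As an illustration (again not from the original answer), here is roughly how the row-cyclic distribution described above would be expressed with MPI_Type_create_darray; the helper name and parameters are mine.

```c
/* Sketch: build a darray type for the row-cyclic distribution in the
 * text (rank 0 owns rows 0, 5, 10, ...; rank 1 owns rows 1, 6, 11, ...)
 * of a global gm x gn matrix of doubles across `size` ranks. */
#include <mpi.h>

MPI_Datatype make_row_cyclic_type(int size, int rank, int gm, int gn)
{
    int gsizes[2]   = { gm, gn };
    int distribs[2] = { MPI_DISTRIBUTE_CYCLIC,      /* rows dealt out one at a time */
                        MPI_DISTRIBUTE_NONE };      /* columns not distributed */
    int dargs[2]    = { 1, MPI_DISTRIBUTE_DFLT_DARG };
    int psizes[2]   = { size, 1 };                  /* size x 1 process grid */
    MPI_Datatype newtype;

    MPI_Type_create_darray(size, rank, 2, gsizes, distribs, dargs, psizes,
                           MPI_ORDER_C, MPI_DOUBLE, &newtype);
    MPI_Type_commit(&newtype);
    return newtype;
}
```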

The only way I have ever used MPI_Type_create_darray, and indeed the only way I have ever seen it used, is to create an MPI file view of a large matrix so that the data is distributed block-cyclically as the file is read, and then to use ScaLAPACK to perform parallel linear algebra operations on the distributed matrix.
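A sketch of that use case, under my own assumptions about the parameters (block size nb, a prow x pcol process grid whose product equals the communicator size, and a pre-allocated local buffer large enough for this rank's share):

```c
/* Sketch: read a gm x gn matrix of doubles so that each rank gets the
 * elements it owns under a 2-D block-cyclic distribution, i.e. the
 * layout ScaLAPACK expects. The caller must allocate `local` large
 * enough for this rank's portion. */
#include <mpi.h>

void read_block_cyclic(MPI_Comm comm, const char *fname, double *local,
                       int gm, int gn, int nb, int prow, int pcol)
{
    int rank, size, tsize;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &size);        /* assumed to equal prow * pcol */

    int gsizes[2]   = { gm, gn };
    int distribs[2] = { MPI_DISTRIBUTE_CYCLIC, MPI_DISTRIBUTE_CYCLIC };
    int dargs[2]    = { nb, nb };      /* block size in each dimension */
    int psizes[2]   = { prow, pcol };  /* 2-D process grid */
    MPI_Datatype filetype;
    MPI_File fh;

    /* ScaLAPACK stores local data column-major, hence MPI_ORDER_FORTRAN. */
    MPI_Type_create_darray(size, rank, 2, gsizes, distribs, dargs, psizes,
                           MPI_ORDER_FORTRAN, MPI_DOUBLE, &filetype);
    MPI_Type_commit(&filetype);

    /* Size of the type in bytes = this rank's share of the matrix. */
    MPI_Type_size(filetype, &tsize);

    MPI_File_open(comm, fname, MPI_MODE_RDONLY, MPI_INFO_NULL, &fh);
    MPI_File_set_view(fh, 0, MPI_DOUBLE, filetype, "native", MPI_INFO_NULL);
    MPI_File_read_all(fh, local, tsize / (int)sizeof(double), MPI_DOUBLE,
                      MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    MPI_Type_free(&filetype);
}
```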

