There are three problems here: one with the allocation, one related to the allocation, and one with how MPI works; none of the other answers covers all three.
The first and most serious problem is where things are allocated. As @davidb correctly points out, as it stands you only allocate memory on task zero, so the other tasks have no memory to receive the broadcast into.
As far as 2d allocations in C go, your code is almost right. In this block of code:
```c
array = (float **)malloc(10*sizeof(float));
for (i=0; i<10; i++)
    array[i] = (float *)malloc(10*sizeof(float));
```
the only real problem is that the first malloc should allocate 10 pointers-to-float, not 10 floats:
```c
array = (float **)malloc(10*sizeof(float *));
for (i=0; i<10; i++)
    array[i] = (float *)malloc(10*sizeof(float));
```
As pointed out by @eznme, the first version may happen to work depending on the memory model you compile and link with, and will almost certainly work on 32-bit machines where floats and pointers are the same size; but just because it works does not mean it is correct :)
Now, the last problem is that while you have allocated a perfectly good 2d array in C, it is not what MPI expects. When you make this call
```c
MPI_Bcast(array, 10*10, MPI_FLOAT, 0, MPI_COMM_WORLD);
```
you are telling MPI to send 100 contiguous floats starting at the address array. Note that the library routine has no way of knowing whether array points to the start of a 2d or 3d or 12d array, or what the individual dimensions are; it does not know that it should follow the pointers, and even if it did, it would not know how many of them to follow.
So what you want to send is a pointer to 100 contiguous floats, and with the usual way of allocating pseudo-multidimensional arrays (*), that is not what you have. You do not even necessarily know how far the 2nd row is from the 1st row in that layout, or in which direction. So what you really want to do is allocate the data contiguously, something like this:
```c
int malloc2dfloat(float ***array, int n, int m) {
    /* allocate the n*m contiguous items */
    float *p = (float *)malloc(n*m*sizeof(float));
    if (!p) return -1;

    /* allocate the row pointers */
    (*array) = (float **)malloc(n*sizeof(float *));
    if (!(*array)) {
        free(p);
        return -1;
    }

    /* point each row into the contiguous block */
    for (int i=0; i<n; i++)
        (*array)[i] = &(p[i*m]);

    return 0;
}

int free2dfloat(float ***array) {
    /* free the contiguous data block, then the row pointers */
    free(&((*array)[0][0]));
    free(*array);
    return 0;
}
```
This way, and only this way, do you guarantee that the memory is contiguous. Then you can do
```c
float **array;
malloc2dfloat(&array, 10, 10);

if (rank == 0) {
    for (i=0; i<10; i++)
        for (j=0; j<10; j++)
            array[i][j] = i + j;
}

MPI_Bcast(&(array[0][0]), 10*10, MPI_FLOAT, 0, MPI_COMM_WORLD);
```
Note that for arbitrary data layouts you could still do the Bcast by creating an MPI datatype that describes how the 2d array is actually laid out in memory; but this way is simpler and closer to what you actually want.
(*) The real underlying problem is that C and C++ do not have real multi-d arrays as first-class objects, which is understandable for a systems programming language, but is undeniably annoying for numerical programming.