C++ + OpenMP for parallel computing: how to set it up in Visual Studio?

I have a C++ program that creates an object and then calls 2 member functions of this object that are independent of each other. It looks like this:

    Object myobject(arg1, arg2);
    double answer1 = myobject.function1();
    double answer2 = myobject.function2();

I would like these 2 calculations to run in parallel to save computation time. I read that this can be done with OpenMP, but I could not figure out how to set it up. The only examples I found sent the same calculation (for example, "hello world!") to different cores, so the output was "hello world!" twice. How can I do it in this situation?

I am using Windows XP with Visual Studio 2005.

+4
3 answers

You should look at the OpenMP sections construct. It works as follows:

    #pragma omp parallel sections
    {
        #pragma omp section
        {
            ... section 1 block ...
        }
        #pragma omp section
        {
            ... section 2 block ...
        }
    }

Both blocks can execute in parallel, provided the team has at least two threads, but it is up to the implementation to decide how and where each section is executed.
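Applied to the code from the question, a minimal sketch could look like this (Object, function1 and function2 are the names from the question; this assumes the two calls really are independent):

    Object myobject(arg1, arg2);
    double answer1 = 0.0, answer2 = 0.0;

    #pragma omp parallel sections
    {
        #pragma omp section
        answer1 = myobject.function1();

        #pragma omp section
        answer2 = myobject.function2();
    }
    /* implicit barrier here: both answers are ready */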

There is a cleaner solution using OpenMP tasks, but this requires your compiler to support OpenMP 3.0. MSVC only supports OpenMP 2.0 (even in VS 11!).
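For reference only, since MSVC cannot compile it, a task-based version under OpenMP 3.0 might look roughly like this (using the same answer1/answer2 variables as in the sketch above):

    #pragma omp parallel
    #pragma omp single
    {
        #pragma omp task shared(answer1)
        answer1 = myobject.function1();

        #pragma omp task shared(answer2)
        answer2 = myobject.function2();

        #pragma omp taskwait  /* wait for both tasks before using the results */
    }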

You must explicitly enable OpenMP support in your project settings (in VS 2005 it is under Configuration Properties -> C/C++ -> Language -> OpenMP Support). If you are compiling from the command line, pass the /openmp switch.
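For example (myprogram.cpp is just a placeholder file name):

    cl /EHsc /openmp myprogram.cpp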

+3

If your code does not need too much memory, you can also use the MPI library. To do this, first install MPI for Visual Studio following this tutorial, Compiling MPI Programs in Visual Studio, or this one: MS-MPI with Visual Studio 2008. Then use this MPI hello world code:

    #include <iostream>
    #include <mpi.h>

    using namespace std;

    int main(int argc, char** argv)
    {
        int mynode, totalnodes;

        MPI_Init(&argc, &argv);
        MPI_Comm_size(MPI_COMM_WORLD, &totalnodes);
        MPI_Comm_rank(MPI_COMM_WORLD, &mynode);

        cout << "Hello world from process " << mynode;
        cout << " of " << totalnodes << endl;

        MPI_Finalize();
        return 0;
    }

For your case, add your functions to it and assign work to each process with if statements like these:

    if (mynode == 0) { function1(); }
    if (mynode == 1) { function2(); }

function1 and function2 can be anything you like, and they will execute at the same time; just make sure the two functions are independent of each other. That's it!
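As a rough sketch, assuming both functions return a double and using the names from the question, the second result could be sent back to process 0 like this (the tag value 0 is arbitrary):

    /* every MPI process constructs its own copy of the object */
    Object myobject(arg1, arg2);
    double answer1 = 0.0, answer2 = 0.0;

    if (mynode == 0) {
        answer1 = myobject.function1();
        /* receive the second result from process 1 */
        MPI_Recv(&answer2, 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }
    if (mynode == 1) {
        answer2 = myobject.function2();
        /* send the second result to process 0 */
        MPI_Send(&answer2, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
    }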

+1

First of all there is the matter of getting OpenMP up and running with Visual Studio 2005, which is quite old; that takes a few steps, but they are described in the answer to this question.

Once that is done, this simple form of task parallelism is fairly straightforward to express, provided you have two methods that are truly completely independent. Note that qualifier: if the methods just read the same data, that's fine, but if either of them updates any state the other uses, directly or by calling other routines that do, then things will break.

As long as the methods are completely independent, you can use sections for them (tasks are actually the more modern, OpenMP 3.0 way, but you probably can't get OpenMP 3.0 support for such an old compiler). You will also see people misuse parallel for loops to achieve this; that at least has the advantage of letting you control the thread assignments, so I include it here for completeness even though I can't recommend it:

    #include <omp.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <stdlib.h>

    int f1() {
        int tid = omp_get_thread_num();
        printf("Thread %d in function f1.\n", tid);
        sleep(rand() % 10);
        return 1;
    }

    int f2() {
        int tid = omp_get_thread_num();
        printf("Thread %d in function f2.\n", tid);
        sleep(rand() % 10);
        return 2;
    }

    int main(int argc, char **argv)
    {
        int answer;
        int ans1, ans2;

        /* using sections */
        #pragma omp parallel num_threads(2) shared(ans1, ans2, answer) default(none)
        {
            #pragma omp sections
            {
                #pragma omp section
                ans1 = f1();
                #pragma omp section
                ans2 = f2();
            }
            #pragma omp single
            answer = ans1 + ans2;
        }
        printf("Answer = %d\n", answer);

        /* hacky approach, mis-using a for loop */
        answer = 0;
        #pragma omp parallel for schedule(static,1) num_threads(2) reduction(+:answer) default(none)
        for (int i = 0; i < 2; i++) {
            if (i == 0) answer += f1();
            if (i == 1) answer += f2();
        }
        printf("Answer = %d\n", answer);

        return 0;
    }

Running this gives

    $ ./sections
    Thread 0 in function f1.
    Thread 1 in function f2.
    Answer = 3
    Thread 0 in function f1.
    Thread 1 in function f2.
    Answer = 3
0

Source: https://habr.com/ru/post/1415165/

