Monte Carlo on the GPU

Today I talked with a friend of mine who told me that he was trying to run some Monte Carlo simulations on the GPU. He wanted to draw random numbers on different processors and assumed they would be uncorrelated, but they were not.

The question is: is there a method for drawing independent sets of random numbers on multiple GPUs? He believed that using a different seed for each of them would solve the problem, but it does not.

If any clarification is needed, please let me know, I will ask him to provide more detailed information.

3 answers

To generate completely independent random numbers, you need a parallel random number generator. Essentially, you pick one seed and derive M independent streams from it; each of the M GPUs then draws its random numbers from its own stream.

When working with multiple GPUs, you need to decide what you want:

  • independent streams within each GPU (if RNs are generated per GPU thread), and
  • independent streams across GPUs.

With multiple GPUs you are usually in the second case: each GPU generates its own RNs, and the streams on the different GPUs must be independent of one another.

Compared to generating everything on a single CPU, you additionally have to make sure that:

  • the streams do not overlap,
  • each stream on its own still has good statistical properties, and
  • each GPU can produce its RNs without communicating with the others.

The question is: how do you split one generator's sequence into such streams?

The usual technique is block splitting (skip-ahead). The sequence is divided into consecutive blocks, and the i-th stream starts where the (i-1)-th stream ends. As long as each block is longer than the number of RNs a stream will ever consume, the streams cannot overlap. Good generators provide an efficient skip-ahead operation, so jumping to the start of a block does not require generating all the preceding numbers.


To get i.i.d. streams, you can use NVIDIA's cuRAND library, which ships with CUDA and offers several generators, including a GPU version of the Mersenne Twister.

For example, to draw, say, 100 samples of a random vector in R^10, you could initialize one generator state per thread, with one seed per vector component:

#include <curand_kernel.h>

__global__ void setup_kernel(curandState *state, int pseed)
{
    int id = blockIdx.x * blockDim.x + threadIdx.x;
    int seed = id % 10 + pseed;

    /* 10 different seeds for uncorrelated rvs,
       a different sequence number per thread, no offset */
    curand_init(seed, id, 0, &state[id]);
}

If you take any “good” generator (for example, the Mersenne Twister), two sequences started from different random seeds will be uncorrelated, whether on the GPU or the CPU. So I'm not sure what you mean by saying that taking different seeds on different GPUs was not enough. Could you clarify?

