This isn't possible out of the box, because GPUs work differently from CPUs: you can't simply hand a GPU the same instructions you would give a CPU.
Nvidia has a good video illustrating the difference between CPU and GPU processing. In short, a GPU typically has several orders of magnitude more cores than a CPU.
Your example is one that translates well to GPU code, since it is highly parallel.
Here is some code for generating random numbers on the GPU (though they are not normally distributed): http://cas.ee.ic.ac.uk/people/dt10/research/rngs-gpu-mwc64x.html
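To give a flavour of the per-work-item approach, here is a minimal sketch (this is not the MWC64X code from that link, just an illustrative stand-in with a made-up kernel name): each work item hashes its global id together with a seed into its own pseudo-random value, so no state needs to be shared between work items.

```c
// Illustrative OpenCL kernel: each work item produces one pseudo-random
// float in [0, 1) by mixing its global id with the seed (SplitMix64-style
// bit mixing). A real application would use a proper generator such as MWC64X.
__kernel void fill_random(__global float *out, ulong seed, uint n)
{
    uint gid = get_global_id(0);
    if (gid >= n) return;

    // Derive a per-work-item 64-bit value from the seed and the global id.
    ulong s = seed + (ulong)gid * 0x9E3779B97F4A7C15UL;
    s ^= s >> 30; s *= 0xBF58476D1CE4E5B9UL;
    s ^= s >> 27; s *= 0x94D049BB133111EBUL;
    s ^= s >> 31;

    // Keep the top 24 bits and scale them into [0, 1).
    out[gid] = (float)(s >> 40) / 16777216.0f;
}
```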
Once you have generated the random numbers, you can split them into chunks, sum each chunk in parallel, and then add the chunk sums together to get the grand total. (There is also an existing question on whether you can compute a sum in parallel in OpenCL.)
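A minimal sketch of that chunk-and-reduce step in OpenCL C (the kernel name and arguments are my own, and it assumes a power-of-two work-group size): each work-group sums its chunk of the input in local memory and writes one partial sum.

```c
// Each work-group reduces its chunk of "in" and writes a single partial sum.
// The host (or a second launch of this kernel) then adds the partial sums.
__kernel void partial_sum(__global const float *in, __global float *partial,
                          __local float *scratch, uint n)
{
    uint gid = get_global_id(0);
    uint lid = get_local_id(0);

    // Load one element per work item (0 if we are past the end of the data).
    scratch[lid] = (gid < n) ? in[gid] : 0.0f;
    barrier(CLK_LOCAL_MEM_FENCE);

    // Tree reduction within the work-group (local size must be a power of two).
    for (uint stride = get_local_size(0) / 2; stride > 0; stride /= 2) {
        if (lid < stride)
            scratch[lid] += scratch[lid + stride];
        barrier(CLK_LOCAL_MEM_FENCE);
    }

    // Work item 0 writes this group's partial sum.
    if (lid == 0)
        partial[get_group_id(0)] = scratch[0];
}
```

The host allocates one float of output per work-group and either adds those few partial sums itself or runs the kernel again over them.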
I gather that your code builds a vector of random numbers and sums it, both sequentially and in parallel, 10 times. With GPU processing, though, parallelizing over only 10 tasks is not very effective, since you would leave most of the cores idle.
Dean MacGregor