I am solving the equation A * x = B, where A is a matrix, B is a vector, and x is the unknown vector to be found.
Hardware Specifications: Intel i7 3630QM (4 cores), nVidia GeForce GT 640M (384 CUDA cores)
Here is an example:
>> A=rand(5000);
>> B=rand(5000,1);
>> Agpu=gpuArray(A);
>> Bgpu=gpuArray(B);
>> tic;A\B;toc;
Elapsed time is 1.382281 seconds.
>> tic;Agpu\Bgpu;toc;
Elapsed time is 4.775395 seconds.
Somehow the GPU is much slower... why? It is also slower for FFT, INV, and LU, which should be related to matrix division.
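In case plain tic/toc is misleading here, this is the re-timing sketch I would try, assuming my release has timeit and gputimeit (if not, synchronized tic/toc would have to do):

% Re-timing sketch: timeit/gputimeit run the function several times,
% and gputimeit synchronizes the device, so warm-up effects and
% asynchronous launches do not distort the measurement.
tCpu = timeit(@() A \ B);
tGpu = gputimeit(@() Agpu \ Bgpu);
fprintf('CPU: %.4f s, GPU: %.4f s\n', tCpu, tGpu);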
However, for matrix multiplication (on the same data), the GPU is much faster:
>> tic;A*B;toc;
Elapsed time is 0.014700 seconds.
>> tic;Agpu*Bgpu;toc;
Elapsed time is 0.000505 seconds.
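(That 0.000505 seconds looks suspiciously fast. As far as I understand, gpuArray operations can return before the kernel has actually finished, so tic/toc may only measure the launch. Here is a synchronized version of the same timing, assuming wait(gpuDevice) exists in my release:)

% Synchronized timing sketch: block until the GPU really finishes,
% otherwise toc can fire right after the asynchronous kernel launch.
dev = gpuDevice;
tic;
Cgpu = Agpu * Bgpu;
wait(dev);   % wait for all queued GPU work to complete
toc;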
The main question is: why is A\B (mldivide) on the GPU so slow compared to the CPU?
UPDATED
Here are some more results when A, B (on the CPU) and AA, BB (on the GPU) are rand(5000):
>> tic;fft(A);toc;
Elapsed time is *0.117189* seconds.
>> tic;fft(AA);toc;
Elapsed time is 1.062969 seconds.
>> tic;fft(AA);toc;
Elapsed time is 0.542242 seconds.
>> tic;fft(AA);toc;
Elapsed time is *0.229773* seconds.
>> tic;fft(AA);toc;
The times marked with asterisks are the stable times. Even so, the GPU is almost twice as slow. By the way, why is the GPU even slower on the first two attempts? Is it compiled twice at first?
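(To separate that one-time cost from the steady-state cost, I would warm the function up first and only time the later calls; a minimal sketch, reusing the AA from above:)

% Warm-up sketch: the first call pays one-time costs (loading the FFT
% kernels, allocating device memory), so discard it before timing.
fft(AA);            % warm-up call, result thrown away
dev = gpuDevice;
wait(dev);          % make sure the warm-up has really finished
tic;
fft(AA);
wait(dev);
toc;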
Moreover:
>> tic;sin(A);toc;
Elapsed time is *0.121008* seconds.
>> tic;sin(AA);toc;
Elapsed time is 0.020448 seconds.
>> tic;sin(AA);toc;
Elapsed time is 0.157209 seconds.
>> tic;sin(AA);toc;
Elapsed time is *0.000419* seconds.
After two calls, the GPU computes sin incredibly fast.
So why is the GPU so slow for matrix division, FFT, and similar calculations, while it is so fast for matrix multiplication and trigonometry? It really shouldn't be this way... the GPU should be faster in all of these calculations, because MATLAB provides overloaded GPU versions of these functions (mldivide, fft).
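(One more thing I plan to try: single precision. rand produces doubles, and consumer GeForce cards are typically much slower at double precision than at single precision, which could matter for mldivide and fft. A sketch; the names As and Bs are my own:)

% Hypothetical single-precision trial (As, Bs are my own names):
% GeForce-class GPUs usually have far lower double-precision than
% single-precision throughput, so this may narrow the gap.
As = gpuArray(single(A));
Bs = gpuArray(single(B));
tic; As \ Bs; wait(gpuDevice); toc;
tic; fft(As); wait(gpuDevice); toc;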
Can someone help me solve these problems please? :)