GPU dynamic memory allocation

Is it possible to dynamically allocate memory in GPU global memory from inside a kernel?
I don't know how big my answer will be, so I need a way to allocate memory for each part of it as I go. CUDA 4.0 allows us to use the RAM ... is this a good idea, or will it slow things down?

+5
2 answers

You can use malloc() inside a kernel. Check the following example, which is taken from the NVIDIA CUDA C Programming Guide. Note that in-kernel malloc and printf require a device with compute capability 2.0 or higher (compile with -arch=sm_20 or above):

#include <cstdio>

__global__ void mallocTest()
{
  // Each of the 5 threads allocates 123 bytes from the device heap.
  char* ptr = (char*)malloc(123);
  printf("Thread %d got pointer: %p\n", threadIdx.x, ptr);
  free(ptr);
}

int main()
{
  // Reserve a 128 MB heap for in-kernel malloc before the first launch.
  cudaDeviceSetLimit(cudaLimitMallocHeapSize, 128*1024*1024);
  mallocTest<<<1, 5>>>();
  cudaDeviceSynchronize();
  return 0;
}

This will print output like:
Thread 0 got pointer: 00057020 
Thread 1 got pointer: 0005708c 
Thread 2 got pointer: 000570f8 
Thread 3 got pointer: 00057164 
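One thing to watch out for: in-kernel malloc allocates from a fixed-size device heap (the cudaLimitMallocHeapSize you set from the host) and returns NULL when that heap is exhausted, so it is worth checking the pointer. Here is a minimal sketch of that (the kernel name and size parameter are my own, not from the manual); it can be launched the same way as mallocTest above:

__global__ void checkedMallocTest(size_t bytes)
{
  // Device-side malloc returns NULL on failure, like host malloc.
  char* ptr = (char*)malloc(bytes);
  if (ptr == NULL)
  {
    printf("Thread %d: allocation of %llu bytes failed\n",
           threadIdx.x, (unsigned long long)bytes);
    return;
  }
  // ... fill the buffer here ...
  free(ptr);
}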
+10

As of CUDA 4.0, you can use the C++ operators new and delete instead of C's malloc and free.
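For example, a minimal sketch (the kernel name is mine; like in-kernel malloc, device-side new needs compute capability 2.0+, and it returns NULL on failure rather than throwing):

#include <cstdio>

__global__ void newDeleteTest()
{
  // Device-side new draws from the same heap as device-side malloc.
  int* data = new int[10];
  if (data != NULL)
  {
    data[0] = threadIdx.x;
    printf("Thread %d allocated array at %p\n", threadIdx.x, (void*)data);
    delete[] data;
  }
}

int main()
{
  // Same heap-size setup as in the malloc example above.
  cudaDeviceSetLimit(cudaLimitMallocHeapSize, 128*1024*1024);
  newDeleteTest<<<1, 5>>>();
  cudaDeviceSynchronize();
  return 0;
}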

+1
