Can I share CUDA GPU device memory between host processes?

Is it possible to have two or more Linux host processes access the same device memory? I have two processes that exchange data between them at high speed, and I want to avoid copying the data from the GPU back to the host in process A only to hand it to process B, which would then send it back to the GPU.

Combining multiple processes into one process is not an option.

1 answer

My understanding of the CUDA API is that this is not possible. Device pointers are specific to the CUDA context of the process that allocated them, and there is no way to share them between processes.
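To illustrate the point, here is a minimal sketch (my own example, not from the answer above) of the naive approach: process A allocates device memory and passes the raw pointer value to process B over a pipe. Because the pointer is only meaningful inside A's CUDA context, B's attempt to use it fails.

```cuda
// Sketch only: shows why a raw device pointer cannot simply be handed
// to another process. The pointer value is tied to the allocating
// process's CUDA context.
#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fds[2];
    if (pipe(fds) != 0) { perror("pipe"); return 1; }

    pid_t pid = fork();           /* fork before any CUDA call */
    if (pid == 0) {
        /* Process B: receive the raw pointer value from process A. */
        close(fds[1]);
        void *dptr = NULL;
        if (read(fds[0], &dptr, sizeof(dptr)) != sizeof(dptr)) return 1;
        close(fds[0]);

        /* Using the pointer in a different process (and therefore a
         * different CUDA context) fails, typically with an
         * invalid-value or illegal-address error. */
        int host = 0;
        cudaError_t err = cudaMemcpy(&host, dptr, sizeof(int),
                                     cudaMemcpyDeviceToHost);
        printf("process B: cudaMemcpy -> %s\n", cudaGetErrorString(err));
        return 0;
    }

    /* Process A: allocate device memory and pass the raw pointer value. */
    close(fds[0]);
    int *dptr = NULL;
    cudaMalloc((void **)&dptr, sizeof(int));
    write(fds[1], &dptr, sizeof(dptr));
    close(fds[1]);

    waitpid(pid, NULL, 0);
    cudaFree(dptr);
    return 0;
}
```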


Source: https://habr.com/ru/post/1316432/
