For example, suppose I have a GPU with 2 GB of RAM, and my application allocates a large array, say 1 GB, as mapped memory (page-locked host memory mapped into the GPU address space, allocated with cudaHostAlloc()). Will the available GPU memory be reduced by that 1 GB of mapped memory, or will I still have (close to) 2 GB, the same as before the allocation and use?
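To make the question concrete, here is a minimal sketch (not part of the original post) that queries the device's free memory with cudaMemGetInfo() before and after allocating 1 GB of mapped host memory; the flag names and call order are the standard CUDA runtime ones, and error checking is omitted for brevity:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    // Mapped (zero-copy) host memory must be enabled before the context is created.
    cudaSetDeviceFlags(cudaDeviceMapHost);
    cudaFree(0);  // force context creation now

    size_t freeBefore, freeAfter, total;
    cudaMemGetInfo(&freeBefore, &total);

    // Allocate 1 GB of page-locked host memory mapped into the GPU address space.
    void* hostPtr = nullptr;
    cudaHostAlloc(&hostPtr, size_t(1) << 30, cudaHostAllocMapped);

    // Device-side alias of the same host buffer (no separate device allocation).
    void* devPtr = nullptr;
    cudaHostGetDevicePointer(&devPtr, hostPtr, 0);

    cudaMemGetInfo(&freeAfter, &total);
    printf("free before: %zu MB, free after: %zu MB\n",
           freeBefore >> 20, freeAfter >> 20);

    cudaFreeHost(hostPtr);
    return 0;
}
```

Comparing the two printed values would show directly whether mapping the host buffer consumes device memory.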