Assuming you know the number of elements that will be stored on the GPU, you can easily calculate the amount of memory required to hold them.
A simple example:
import numpy as np
import theano.tensor as T

T.config.floatX = 'float32'
dataPoints = np.random.random((5000, 256 * 256)).astype(T.config.floatX)

# 5000 * 65536 elements * 4 bytes per float32
memoryNeededInGBs = dataPoints.nbytes / 1024. / 1024 / 1024
print "Data will need %.2f GBs of free memory" % memoryNeededInGBs
Assuming zero overhead, this will print:
>>> Data will need 1.22 GBs of free memory
If you use an NVIDIA graphics card and installed CUDA on your computer, you can easily get the total amount of free memory on your GPU using the following line of code:
import theano.sandbox.cuda.basic_ops as sbcuda
import numpy as np
import theano.tensor as T
from theano import shared

T.config.floatX = 'float32'
GPUFreeMemoryInBytes = sbcuda.cuda_ndarray.cuda_ndarray.mem_info()[0]
freeGPUMemInGBs = GPUFreeMemoryInBytes / 1024. / 1024 / 1024
print "Your GPU has %s GBs of free memory" % str(freeGPUMemInGBs)

# Allocate the data on the GPU, then query the free memory again
testData = shared(np.random.random((5000, 256 * 256)).astype(T.config.floatX), borrow=True)
GPUFreeMemoryInBytes = sbcuda.cuda_ndarray.cuda_ndarray.mem_info()[0]
newFreeGPUMemInGBs = GPUFreeMemoryInBytes / 1024. / 1024 / 1024
print "The tasks above used %s GBs of your GPU memory. The available memory is %s GBs" % (str(freeGPUMemInGBs - newFreeGPUMemInGBs), str(newFreeGPUMemInGBs))
The output will then look like this (on my machine):
>>> Your GPU has 11.2557678223 GBs of free memory
>>> The tasks above used 1.22077941895 GBs of your GPU memory. The available memory is 10.0349884033 GBs
By monitoring the amount of free memory and calculating the size of your model and data, you can make better use of GPU memory. However, be aware of the memory fragmentation problem, as it may cause an unexpected MemoryError.
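As a rough pre-check, the size calculation above can be done from the shape and dtype alone, before allocating anything, and compared against the free-memory figure reported by the GPU. The helper names below (`required_gbs`, `fits_on_gpu`) and the safety margin are my own illustration, not part of Theano; a sketch, assuming the free-memory value comes from a call like mem_info() shown earlier:

```python
import numpy as np

def required_gbs(shape, dtype):
    # Bytes = number of elements * bytes per element for the given dtype
    n_elements = 1
    for dim in shape:
        n_elements *= dim
    return n_elements * np.dtype(dtype).itemsize / 1024. / 1024 / 1024

def fits_on_gpu(shape, dtype, free_gbs, safety_margin=0.05):
    # Leave a small margin for allocator overhead and fragmentation
    return required_gbs(shape, dtype) <= free_gbs * (1 - safety_margin)

# The (5000, 256 * 256) float32 array from above needs ~1.22 GBs
print("%.2f GBs" % required_gbs((5000, 256 * 256), 'float32'))  # prints "1.22 GBs"
```

Because this only inspects shape and dtype, you can run it on any machine, even one without a GPU, to plan batch sizes in advance.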