Caffe: how to choose the maximum available batch size that can fit in memory?

I am having problems due to my small GPU memory (1 GB). At the moment I choose batch_size by trial and error, and it seems that even when the Memory required for data: value printed in the log is less than 1 GB, training can still run out of memory.

So my questions are:

  • How can I automatically select the maximum batch size that fits in GPU memory?
  • Is a larger batch_size always better?
  • How can I calculate the peak memory needed for training and for the forward pass when deploying a network?

UPDATE: I also checked the code, but I'm not sure whether top_vecs_ is the relevant part.
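In the absence of built-in support, one pragmatic approach to the first question is an automated trial loop: binary-search over batch sizes using a predicate that reports whether a given batch size fits (for example, by building the net at that batch size and catching the out-of-memory error). The sketch below is generic and hypothetical; `fits` is a user-supplied callback, not a Caffe API.

```python
def max_batch_size(fits, lo=1, hi=1024):
    """Binary-search the largest batch size b in [lo, hi] with fits(b) True.

    `fits` is a user-supplied predicate, e.g. one that instantiates the net
    with batch size b and returns False if allocation fails with an
    out-of-memory error. Assumes monotonicity: if b fits, so does any b' < b.
    """
    if not fits(lo):
        return 0                      # even the smallest batch does not fit
    while lo < hi:
        mid = (lo + hi + 1) // 2      # bias upward so the loop terminates
        if fits(mid):
            lo = mid                  # mid fits; search larger sizes
        else:
            hi = mid - 1              # mid is too big; search smaller sizes
    return lo
```

Because each probe may allocate and free GPU memory, the predicate should fully release the failed net before returning, or fragmentation can make later probes fail spuriously.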

1 answer

Even if the amount of memory printed in the log on the Memory required for data line is less than the total GPU memory, training can still fail, because other programs use part of your GPU memory. Under Linux, you can use the nvidia-smi command to check the statistics. For me, the Ubuntu graphical environment was using 97 MB.
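You can also query the free memory programmatically. The sketch below shells out to `nvidia-smi --query-gpu=memory.free --format=csv,noheader,nounits`, which prints one integer (in MiB) per GPU; the helper names here are my own, not part of any library.

```python
import subprocess

def parse_free_mib(smi_output):
    """Parse nvidia-smi's csv,noheader,nounits output: one MiB value per line."""
    return [int(line) for line in smi_output.strip().splitlines()]

def free_gpu_memory_mib():
    """Free memory per GPU in MiB, or [] if nvidia-smi is unavailable."""
    try:
        out = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=memory.free",
             "--format=csv,noheader,nounits"],
            text=True)
    except (OSError, subprocess.CalledProcessError):
        return []
    return parse_free_mib(out)
```

Comparing this number against Caffe's reported requirement, with some headroom for cuDNN workspaces and the desktop environment, gives a quick sanity check before a long training run.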

  • There is no way to tell Caffe to do this automatically.
  • Yes, for training: a larger batch processes more data per pass and converges in fewer epochs, because each SGD iteration more closely approximates full-batch gradient descent. For deployment it is not so important.
  • For calculating the memory a network needs, see: http://cs231n.imtqy.com/convolutional-networks/
