I am having some problems due to the small GPU memory (1 GB). At the moment I choose batch_size by trial and error, and it seems that even when the amount printed in the Memory required for data: log line is well below 1 GB, training can still run out of GPU memory.
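To show what I mean by trial and error, here is a minimal sketch. It assumes the standard caffe train command line, a solver limited to a few iterations, and a hypothetical rewrite_batch_size() helper that is not part of Caffe:

```python
import re
import subprocess

def rewrite_batch_size(prototxt_path, batch_size):
    # Hypothetical helper (not part of Caffe): overwrite every batch_size
    # entry in the prototxt with the value we want to test.
    with open(prototxt_path) as f:
        text = f.read()
    text = re.sub(r'batch_size:\s*\d+', 'batch_size: %d' % batch_size, text)
    with open(prototxt_path, 'w') as f:
        f.write(text)

for batch_size in (256, 128, 64, 32, 16, 8):
    rewrite_batch_size('train_val.prototxt', batch_size)
    # Caffe aborts with a non-zero exit code when the GPU allocation fails,
    # so a zero return code means this batch size fit for the short test run.
    ret = subprocess.call(['caffe', 'train', '--solver=solver.prototxt', '--gpu=0'])
    if ret == 0:
        print('batch_size %d fits in GPU memory' % batch_size)
        break
```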
So my questions are:
- How can I automatically select the largest batch_size that fits in GPU memory?
- Is a larger batch_size always better?
- How can I calculate the peak memory needed for training, and for the forward pass when deploying a network? (A rough blob-size count is sketched after this list.)
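For the third question, the best I can do right now is a rough count of the blob sizes from pycaffe, sketched below. It assumes pycaffe is installed, train_val.prototxt is a placeholder, and doubling the total for the diffs and solver state is my own guess rather than anything Caffe reports.

```python
import caffe

caffe.set_mode_cpu()  # build the net on the CPU just to inspect blob shapes
net = caffe.Net('train_val.prototxt', caffe.TRAIN)

# Activation blobs: roughly what the "Memory required for data:" line counts.
data_bytes = sum(b.data.nbytes for b in net.blobs.values())
# Learnable parameters (weights and biases).
param_bytes = sum(p.data.nbytes for ps in net.params.values() for p in ps)

MB = 2.0 ** 20
print('activations: %.1f MB' % (data_bytes / MB))
print('parameters:  %.1f MB' % (param_bytes / MB))
# Training also keeps a diff for every blob and parameter, plus solver
# history (e.g. momentum), so I double the total as a rough lower bound.
print('rough training lower bound: %.1f MB' % (2 * (data_bytes + param_bytes) / MB))
```

The log's data line only seems to cover the first of these sums, which is probably why it underestimates the real peak usage.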
UPDATE:
I also checked the code, but I'm not sure whether top_vecs_ is the relevant part.