How to manage memory when using Keras with tensorflow backend?

I created a wrapper class that initializes a keras.models.Sequential model and provides several methods for starting the training process and monitoring its progress. I instantiate this class in my main file and run the training there. So much for the background.

My question is:

How can I free all of the GPU memory allocated by TensorFlow? I tried the following, with no luck:

    import keras.backend.tensorflow_backend as K

    with K.get_session() as sess:
        K.set_session(sess)

        import tensorflow as tf
        from neural_net import NeuralNet

        with tf.device('/gpu:0'):
            nn = NeuralNet('config', train_db_path, test_db_path)
            nn.train(1000, 1)

        print 'Done'

    K._SESSION.close()
    K.set_session(None)

Even after closing the session and resetting it to None, nvidia-smi does not show any reduction in memory usage. Any ideas?
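(One likely reason nvidia-smi shows no change: by default, TensorFlow 1.x grabs nearly all GPU memory up front and its allocator does not return freed memory to the OS while the process lives. This does not free memory after the fact, but a session configured with `allow_growth` at least allocates on demand. A minimal sketch; the function name is mine, and the import is done lazily so the definition loads even without TensorFlow installed:)

```python
def make_growth_session():
    """Build a TF 1.x session that allocates GPU memory on demand
    instead of reserving (nearly) all of it at startup."""
    import tensorflow as tf  # lazy import: only needed when actually called
    config = tf.ConfigProto()
    config.gpu_options.allow_growth = True
    return tf.Session(config=config)
```

You would pass the returned session to `K.set_session(...)` before building any models.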

Idea

One idea is to add an __exit__ method to my NeuralNet class and use it like this:

    with NeuralNet() as nn:
        nn.train(1000, 1)

How can I free the Keras model's resources in this method?
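(A possible shape for that context manager, as a sketch only: the wrapper class name is mine, and it assumes `NeuralNet` is importable as in the question's snippet. On exit it calls `K.clear_session()`, which destroys the current TF graph and session; note this returns memory to TensorFlow's allocator, not necessarily to the OS as seen by nvidia-smi.)

```python
class NeuralNetSession(object):
    """Hypothetical context-manager wrapper around the asker's NeuralNet
    that tears down Keras/TF state when the with-block exits."""

    def __init__(self, *args, **kwargs):
        self.args, self.kwargs = args, kwargs
        self.nn = None

    def __enter__(self):
        from neural_net import NeuralNet  # the asker's class, assumed importable
        self.nn = NeuralNet(*self.args, **self.kwargs)
        return self.nn

    def __exit__(self, exc_type, exc_value, traceback):
        from keras import backend as K
        K.clear_session()  # destroy the current TF graph and session
        self.nn = None
        return False  # do not suppress exceptions
```

Usage would then mirror the snippet above: `with NeuralNetSession('config', train_db_path, test_db_path) as nn: nn.train(1000, 1)`.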

Test environment

I am using an IPython notebook on Ubuntu 14.04 with 3 GTX 960 GPUs.

python deep-learning gpu tensorflow keras
1 answer

The following works for me to reinitialize the Keras session state in my Jupyter notebook between runs:

    from keras import backend as K
    import tensorflow as tf

    K.clear_session()
    sess = tf.Session()
    K.set_session(sess)
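(A sketch of how this reset can be used in practice when doing several trainings in one notebook process, e.g. a hyperparameter sweep. Everything here is an assumption on my part: `sweep` is my name, and `build_and_train` is a hypothetical user-supplied callable; the Keras/TF imports are done lazily inside the function.)

```python
def sweep(learning_rates, build_and_train):
    """Run several trainings in one process, resetting Keras state between
    runs so stale graphs don't accumulate in GPU memory.
    `build_and_train` is a hypothetical callable: lr -> result."""
    results = []
    for lr in learning_rates:
        from keras import backend as K
        import tensorflow as tf
        K.clear_session()            # drop the previous graph and session
        K.set_session(tf.Session())  # start this run with a fresh one
        results.append(build_and_train(lr))
    return results
```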

In addition, the graph is rebuilt from scratch on each notebook run, using:

    graphr = K.get_session().graph
    with graphr.as_default():
        # ...graph building statements...

Note: I'm still trying to wrap my head around the Keras and TensorFlow concepts (I find them poorly described in the documentation and examples), but the above works for me.

