I am training a CNN with TensorFlow for medical imaging applications.
Since I don't have a lot of data, I try to apply random distortions to my training batches during training in order to artificially enlarge my training set. I defined the following function in a separate script and call it from my training loop:
    import tensorflow as tf

    def randomly_modify_training_batch(images_train_batch, batch_size):
        for i in range(batch_size):
            image = images_train_batch[i]
            image_tensor = tf.convert_to_tensor(image)
            # Random geometric and photometric distortions
            distorted_image = tf.image.random_flip_left_right(image_tensor)
            distorted_image = tf.image.random_flip_up_down(distorted_image)
            distorted_image = tf.image.random_brightness(distorted_image, max_delta=60)
            distorted_image = tf.image.random_contrast(distorted_image, lower=0.2, upper=1.8)
            # Evaluate the ops and write the result back into the batch
            with tf.Session():
                images_train_batch[i] = distorted_image.eval()
This code works well for applying distortions to my images.
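For context, this is roughly how I call it from my training loop (the batch-loading name below is a placeholder for my actual pipeline, not real code):

    for step in range(num_training_steps):
        # load_next_batch stands in for my actual data loading
        images_batch, labels_batch = load_next_batch(batch_size)
        randomly_modify_training_batch(images_batch, batch_size)
        # ... feedforward + backpropagation on the distorted batch ...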
The problem is this:
After each iteration of my training loop (feedforward + backpropagation), applying the same function to the next training batch consistently takes about 5 seconds longer than it did the time before.
Processing starts at about 1 second per batch and grows to more than a minute after a little over 10 iterations.
What causes this slowdown? How can I prevent this?
(I suspect it is something with distorted_image.eval(), but I'm not quite sure. Am I opening a new session every time? Shouldn't TensorFlow close the session automatically, since I use it in a with tf.Session() block?)
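In case it helps, here is a minimal sketch of the restructuring I have been considering, assuming (and I am not sure this is the actual cause) that the slowdown comes from each call adding new ops to the default graph and opening a new session. The placeholder name and shape are just illustrative assumptions:

    import tensorflow as tf

    # Build the distortion ops once, outside the training loop.
    # Shape [None, None, 3] is an assumption for illustration.
    image_placeholder = tf.placeholder(tf.float32, shape=[None, None, 3])
    distorted = tf.image.random_flip_left_right(image_placeholder)
    distorted = tf.image.random_flip_up_down(distorted)
    distorted = tf.image.random_brightness(distorted, max_delta=60)
    distorted = tf.image.random_contrast(distorted, lower=0.2, upper=1.8)

    sess = tf.Session()  # one session, reused for every batch

    def randomly_modify_training_batch(images_train_batch, batch_size):
        for i in range(batch_size):
            # Feed each image through the pre-built ops instead of
            # rebuilding them and opening a new session per image
            images_train_batch[i] = sess.run(
                distorted, feed_dict={image_placeholder: images_train_batch[i]})

Would building the graph once and reusing a single session like this be the right way to fix it?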