TensorFlow training becomes slower and slower once iterations exceed 10,000. Why?

I feed data into the graph using the input-pipeline approach, with tf.train.shuffle_batch generating the batches. However, as training progresses, TensorFlow runs slower and slower on each subsequent iteration. I am confused about what the main reason for this could be. Thank you so much! My code snippet:

    def main(argv=None):
        # define network parameters
        # weights
        # bias

        # define graph
        # graph network

        # define loss and optimization method
        # data = inputpipeline('*')
        # loss
        # optimizer

        # Initializing the variables
        init = tf.initialize_all_variables()

        # 'Saver' op to save and restore all the variables
        saver = tf.train.Saver()

        # Running session
        print "Starting session... "
        with tf.Session() as sess:
            # initialize the variables
            sess.run(init)

            # initialize the queue threads to start to shovel data
            coord = tf.train.Coordinator()
            threads = tf.train.start_queue_runners(coord=coord)

            print "from the train set:"
            for i in range(train_set_size * epoch):
                _, d, pre = sess.run([optimizer, depth_loss, prediction])

            print "Training Finished!"

            # Save the variables to disk.
            save_path = saver.save(sess, model_path)
            print("Model saved in file: %s" % save_path)

            # stop our queue threads and properly close the session
            coord.request_stop()
            coord.join(threads)
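For context, here is a minimal sketch of what an input pipeline built on tf.train.shuffle_batch typically looks like; the file-pattern handling, PNG decoding, fixed shape, and queue capacities below are illustrative assumptions, not the actual inputpipeline code:

    def inputpipeline(pattern, batch_size=32):
        # Queue up the input files matching the glob pattern.
        filenames = tf.train.match_filenames_once(pattern)
        filename_queue = tf.train.string_input_producer(filenames)

        # Read and decode one example; PNG decoding and the fixed
        # 64x64x3 shape are placeholder assumptions.
        reader = tf.WholeFileReader()
        _, value = reader.read(filename_queue)
        image = tf.cast(tf.image.decode_png(value, channels=3), tf.float32)
        image.set_shape([64, 64, 3])

        # shuffle_batch keeps a buffer of at least min_after_dequeue
        # examples and samples shuffled batches from it.
        return tf.train.shuffle_batch(
            [image],
            batch_size=batch_size,
            capacity=2000,
            min_after_dequeue=1000,
            num_threads=4)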
1 answer

During training you should call sess.run only once per iteration, fetching every tensor you need in that single call. I recommend trying something like this; hope it helps:

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for i in range(train_set_size * epoch):
            _, d, pre = sess.run([optimizer, depth_loss, prediction])
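If the per-iteration time keeps growing with the iteration count, it is also worth checking that nothing inside the loop adds new operations to the graph, since a steadily growing graph is a common cause of this symptom. One way to check, sketched below, is to finalize the graph before the loop so TensorFlow raises an error the moment any new op is created:

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())

        # Make the graph read-only; any attempt to add an op inside the
        # loop now raises a RuntimeError instead of silently slowing
        # training down.
        sess.graph.finalize()

        for i in range(train_set_size * epoch):
            _, d, pre = sess.run([optimizer, depth_loss, prediction])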
