I am referring to the Deep MNIST for Experts tutorial from TensorFlow. I have a question about the Train and Evaluate the Model part of this tutorial, where they give the following code example.
cross_entropy = tf.reduce_mean(
    -tf.reduce_sum(y_ * tf.log(y_conv), reduction_indices=[1]))
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
correct_prediction = tf.equal(tf.argmax(y_conv, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
sess.run(tf.initialize_all_variables())
for i in range(20000):
    batch = mnist.train.next_batch(50)
    if i % 100 == 0:
        train_accuracy = accuracy.eval(
            feed_dict={x: batch[0], y_: batch[1], keep_prob: 1.0})
        print("step %d, training accuracy %g" % (i, train_accuracy))
    train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})
print("test accuracy %g" % accuracy.eval(
    feed_dict={x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0}))
So, in this code segment, they sometimes call accuracy.eval() and other times train_step.run(). As far as I know, both of these are TensorFlow graph objects.
And in some other code I have seen the following form used instead:
sess.run(variable, feed_dict)
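For example, my guess is that the accuracy check from the tutorial above could also be written with sess.run() like this, though I am not sure whether it is fully equivalent:

    # my attempt at rewriting the tutorial's accuracy.eval() call using sess.run(),
    # reusing the x, y_, keep_prob, accuracy and batch names defined above
    train_accuracy = sess.run(
        accuracy,
        feed_dict={x: batch[0], y_: batch[1], keep_prob: 1.0})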
So my question is: what are the differences between these three ways of running things, and how do I know which one to use when?
Thanks!
Ramesh-x