Validation and Testing with TensorFlow

I created a neural network with one hidden layer and a pyramidal architecture using TensorFlow. Here is the code:

import tensorflow as tf

batch_size = 128          # example mini-batch size (was not defined in the snippet)
num_labels = 10           # ten output classes
image_size = 28
hidden_size = image_size * image_size // 2   # pyramidal: hidden layer is half the input size

#Read the data
train_dataset, train_labels, valid_dataset, valid_labels, test_dataset, test_labels = OpenDataSets("...")

#Create and convert what is needed.
tf_train_dataset = tf.placeholder(tf.float32, shape=(batch_size, image_size * image_size))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)

#Then I create the NN.
Wh = tf.Variable(tf.truncated_normal([image_size * image_size, hidden_size]))
bh = tf.Variable(tf.truncated_normal([hidden_size]))
hidden = tf.nn.relu(tf.matmul(tf_train_dataset, Wh) + bh)

Wout = tf.Variable(tf.truncated_normal([hidden_size, num_labels]))
bout = tf.Variable(tf.truncated_normal([num_labels]))
logits = tf.nn.relu(tf.matmul(hidden, Wout) + bout)

loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
train_prediction = tf.nn.softmax(logits)

And now I am training my NN:

 with tf.Session(graph=graph) as session:
     tf.initialize_all_variables().run()
     for step in range(1000):
         offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
         batch_data = train_dataset[offset:(offset + batch_size), :]
         batch_labels = train_labels[offset:(offset + batch_size), :]
         feed_dict = {tf_train_dataset: batch_data, tf_train_labels: batch_labels}
         _, l, predictions = session.run([optimizer, loss, train_prediction], feed_dict=feed_dict)

Now I would like to validate and test my NN after training. But I don't know how to create a new feed_dict and use session.run for validation / testing.

Thank you for your help!

1 answer

First you need to create the corresponding validation / testing tensor expressions. For a one-hidden-layer MLP, that means the matrix multiplications with the weights plus the biases (as well as the ReLUs, since you have them in the original model). Define these directly below your train prediction:

 valid_prediction = tf.nn.softmax(
     tf.nn.relu(tf.matmul(
         tf.nn.relu(tf.matmul(tf_valid_dataset, Wh) + bh), Wout) + bout))
 test_prediction = tf.nn.softmax(
     tf.nn.relu(tf.matmul(
         tf.nn.relu(tf.matmul(tf_test_dataset, Wh) + bh), Wout) + bout))

These expressions are exactly the same computation as the logits defined in your code, only using tf_valid_dataset and tf_test_dataset as inputs, respectively. You can create intermediate variables to simplify them, as in the sketch below.
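For example, one way to do that simplification (a sketch; the helper name model is mine, not part of the original code) is to wrap the forward pass in a function and reuse it for all three datasets:

 def model(data):
     # Forward pass of the one-hidden-layer network from the question.
     hidden = tf.nn.relu(tf.matmul(data, Wh) + bh)
     return tf.nn.relu(tf.matmul(hidden, Wout) + bout)

 logits = model(tf_train_dataset)
 valid_prediction = tf.nn.softmax(model(tf_valid_dataset))
 test_prediction = tf.nn.softmax(model(tf_test_dataset))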

Then you will need a validation / testing function to measure accuracy. The easiest way is to check the most probable class (roughly speaking, one minus the classification error). Define this outside the graph / session:

 import numpy as np

 def accuracy(predictions, labels):
     # Percentage of samples whose most probable class matches the true class.
     pred_class = np.argmax(predictions, 1)
     true_class = np.argmax(labels, 1)
     return 100.0 * np.sum(pred_class == true_class) / predictions.shape[0]

After that, you can simply call this accuracy function inside the same session (after the training loop) to compute the validation / test scores:

 print('Validation accuracy: %.1f%%' % accuracy(valid_prediction.eval(), valid_labels))
 print('Test accuracy: %.1f%%' % accuracy(test_prediction.eval(), test_labels))
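Note that valid_prediction.eval() only works inside the with tf.Session(...) block, because Tensor.eval() uses the default session. Equivalently, since tf_valid_dataset and tf_test_dataset are tf.constant nodes baked into the graph, you can fetch both predictions with session.run and no feed_dict at all. A minimal sketch of where this goes, appended to the training block from the question:

 with tf.Session(graph=graph) as session:
     tf.initialize_all_variables().run()
     # ... training loop from the question ...
     # No feed_dict is needed: the validation/test inputs are constants.
     valid_preds, test_preds = session.run([valid_prediction, test_prediction])
     print('Validation accuracy: %.1f%%' % accuracy(valid_preds, valid_labels))
     print('Test accuracy: %.1f%%' % accuracy(test_preds, test_labels))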
