First you need to create the matching tensor operations for the validation / test sets. For a single-hidden-layer MLP this means the nested matrix multiplications with the weights plus the biases (as well as the ReLU, since you have it in the original model). Define them directly below train_prediction.
    # same two-layer forward pass as in training, applied to the validation / test sets
    valid_prediction = tf.nn.softmax(
        tf.nn.relu(tf.matmul(tf.nn.relu(tf.matmul(tf_valid_dataset, Wh) + bh), Wout) + bout))
    test_prediction = tf.nn.softmax(
        tf.nn.relu(tf.matmul(tf.nn.relu(tf.matmul(tf_test_dataset, Wh) + bh), Wout) + bout))
These expressions are exactly the same as the logits variable defined in your code, only applied to tf_valid_dataset and tf_test_dataset respectively. You can create intermediate variables to simplify them.
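As an illustration, here is a minimal sketch of that simplification, assuming Wh, bh, Wout, bout and the tf_*_dataset tensors are the ones from your graph (the helper name forward is just illustrative):

    def forward(dataset):
        # shared forward pass: hidden ReLU layer followed by the output layer
        hidden = tf.nn.relu(tf.matmul(dataset, Wh) + bh)
        return tf.nn.relu(tf.matmul(hidden, Wout) + bout)

    logits = forward(tf_train_dataset)
    valid_prediction = tf.nn.softmax(forward(tf_valid_dataset))
    test_prediction = tf.nn.softmax(forward(tf_test_dataset))

This way the training, validation and test predictions are guaranteed to use the same expression.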
Then you need an accuracy function to evaluate performance on the validation / test sets. The simplest option is to compare the most probable class against the true label (roughly speaking, one minus the classification error). Define it outside the graph / session.
    import numpy as np

    def accuracy(predictions, labels):
        # index of the most probable class per example, and of the true one-hot label
        pred_class = np.argmax(predictions, 1)
        true_class = np.argmax(labels, 1)
        # percentage of examples whose predicted class matches the label
        return 100.0 * np.sum(pred_class == true_class) / predictions.shape[0]
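A quick sanity check with dummy values (not from the original code, just to show the expected behaviour):

    preds = np.array([[0.9, 0.1], [0.2, 0.8]])   # softmax-like outputs
    labels = np.array([[1, 0], [0, 1]])          # one-hot labels
    print(accuracy(preds, labels))               # prints 100.0: both argmaxes match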
After that, you can simply call this accuracy function inside the session (the same one you run with feed_dict for training) to compute the validation / test accuracy.
    print('Validation accuracy: %.1f%%' % accuracy(valid_prediction.eval(), valid_labels))
    print('Test accuracy: %.1f%%' % accuracy(test_prediction.eval(), test_labels))
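For context, a hedged sketch of where those calls usually sit in the TF 1.x training loop; graph, optimizer, loss, train_prediction, batch_size, the tf_train_* placeholders and the numpy train_dataset / train_labels arrays are assumed to be defined as in the standard setup, and num_steps is an arbitrary illustrative value:

    num_steps = 3001  # illustrative

    with tf.Session(graph=graph) as session:
        tf.global_variables_initializer().run()
        for step in range(num_steps):
            # pick a minibatch and feed it into the training placeholders
            offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
            batch_data = train_dataset[offset:(offset + batch_size), :]
            batch_labels = train_labels[offset:(offset + batch_size), :]
            feed_dict = {tf_train_dataset: batch_data, tf_train_labels: batch_labels}
            _, l, predictions = session.run(
                [optimizer, loss, train_prediction], feed_dict=feed_dict)
            if step % 500 == 0:
                print('Minibatch accuracy: %.1f%%' % accuracy(predictions, batch_labels))
                print('Validation accuracy: %.1f%%'
                      % accuracy(valid_prediction.eval(), valid_labels))
        print('Test accuracy: %.1f%%' % accuracy(test_prediction.eval(), test_labels))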