When using an RMSE loss in TensorFlow, I get very small loss values well below 1

Hello, I have a network that produces logits / outputs as follows:

 logits = tf.placeholder(tf.float32, [None, 128, 64, 64])  # outputs
 y = tf.placeholder(tf.float32, [None, 128, 64, 64])       # ground truth / targets

-> The ground-truth values y are scaled down from [0, 255] to [0, 1], as I read that it is better to work in the range [0, 1].
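For reference, a minimal sketch of that scaling step (TF 1.x style; the name y_raw is my own placeholder, not from the original post):

 import tensorflow as tf

 y_raw = tf.placeholder(tf.float32, [None, 128, 64, 64])  # targets still in [0, 255]
 y = y_raw / 255.0                                         # scaled down to [0, 1]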

Now I want to compute the RMSE / Euclidean loss as follows:

 loss = tf.reduce_mean(tf.square(logits - y)) 

or

 loss = tf.sqrt(tf.reduce_mean(tf.square(tf.subtract(y, logits)))) 

I am not sure which one is better.

At the same time, my loss values start at about 1.0 and then quickly drop to about 2.5e-4. When I use EuclideanLoss in Caffe for the same network, the loss values start at about 1000 and drop to 200. Am I doing something wrong in TensorFlow, or why are the loss values so small? I cannot really track the loss values in TensorBoard since they are so small. Can anybody help me?

1 answer

The first loss you propose is the better choice (the second just introduces an unnecessary sqrt; minimizing the mean squared error and minimizing its square root lead to the same optimum).

Values equal to or below 1 are the only values you can get: your targets (and presumably your outputs) lie between 0 and 1, so the largest possible squared error per element is 1, and so is their mean.
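A rough sketch of why the two numbers live on such different scales, assuming Caffe's EuclideanLoss has the form sum of squared differences divided by 2·batch_size and that the Caffe targets stay in [0, 255] (both assumptions, not taken from the question):

 import numpy as np

 np.random.seed(0)
 batch, c, h, w = 4, 128, 64, 64

 # Hypothetical predictions and targets in [0, 1], only to compare magnitudes.
 pred = np.random.rand(batch, c, h, w).astype(np.float32)
 truth = np.random.rand(batch, c, h, w).astype(np.float32)

 # Loss from the question: mean of squared differences, bounded by 1 for [0, 1] data.
 tf_style_mse = np.mean(np.square(pred - truth))

 # Caffe-style EuclideanLoss (assumed form), on the same data kept at [0, 255] scale.
 caffe_style = np.sum(np.square(255.0 * (pred - truth))) / (2.0 * batch)

 print(tf_style_mse)  # well below 1
 print(caffe_style)   # many orders of magnitude larger: 255^2 and a sum over all pixels

So a TensorFlow loss of 2.5e-4 and a Caffe loss of a few hundred can correspond to a similar fit; only the value scaling and the reduction (mean vs. sum) differ.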

If you have trouble visualizing the loss in TensorBoard, try displaying the charts on a log scale (one of the two buttons under the charts).
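If the loss curve is missing entirely, here is a minimal sketch of logging it as a scalar summary (TF 1.x API; the log directory, summary name, and dummy feed data are arbitrary):

 import numpy as np
 import tensorflow as tf

 logits = tf.placeholder(tf.float32, [None, 128, 64, 64])
 y = tf.placeholder(tf.float32, [None, 128, 64, 64])
 loss = tf.reduce_mean(tf.square(logits - y))

 # Record the loss so TensorBoard can plot it (and its log-scale toggle applies).
 tf.summary.scalar('loss', loss)
 merged = tf.summary.merge_all()

 with tf.Session() as sess:
     writer = tf.summary.FileWriter('./logs', sess.graph)
     batch = np.random.rand(2, 128, 64, 64).astype(np.float32)
     summary = sess.run(merged, feed_dict={logits: batch, y: np.zeros_like(batch)})
     writer.add_summary(summary, global_step=0)  # call once per training step in practice
     writer.close()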
