I am trying to use various loss functions in TensorFlow.
The loss function I want is an epsilon-insensitive loss (applied componentwise):
    if |yData - yModel| < epsilon: loss = 0
    else:                          loss = |yData - yModel|
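For concreteness, here is the same componentwise rule as plain NumPy (the function and argument names are mine):

    import numpy as np

    # Plain-NumPy restatement of the componentwise rule above.
    def eps_insensitive(y_data, y_model, epsilon=0.2):
        err = np.abs(y_data - y_model)
        return np.where(err < epsilon, 0.0, err)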
I tried this solution:
    yData = tf.placeholder("float", [None, numberOutputs])
    yModel = model(...)  # model definition elided here

    epsilon = 0.2
    epsilonTensor = epsilon * tf.ones_like(yData)
    loss = tf.maximum(tf.abs(yData - yModel) - epsilonTensor, tf.zeros_like(yData))

    optimizer = tf.train.GradientDescentOptimizer(0.25)
    train = optimizer.minimize(loss)
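For reference, a self-contained sketch of the same setup with the elementwise loss reduced to a scalar via tf.reduce_mean; the linear stand-in model, the value of numberOutputs, and the made-up data are my assumptions, since model(...) is not shown above:

    import tensorflow as tf

    numberOutputs = 1                       # assumed for this sketch
    x = tf.placeholder("float", [None, 1])  # stand-in input
    yData = tf.placeholder("float", [None, numberOutputs])

    # Stand-in linear model in place of model(...).
    W = tf.Variable(tf.zeros([1, numberOutputs]))
    b = tf.Variable(tf.zeros([numberOutputs]))
    yModel = tf.matmul(x, W) + b

    epsilon = 0.2
    # Componentwise epsilon-insensitive loss, reduced to one scalar
    # so the optimizer minimizes a single well-defined objective.
    elementwise = tf.maximum(tf.abs(yData - yModel) - epsilon, 0.0)
    loss = tf.reduce_mean(elementwise)
    train = tf.train.GradientDescentOptimizer(0.25).minimize(loss)

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        xs = [[0.0], [1.0], [2.0]]
        ys = [[1.0], [3.0], [5.0]]  # made-up data: y = 2x + 1
        for _ in range(200):
            sess.run(train, feed_dict={x: xs, yData: ys})
        print(sess.run(loss, feed_dict={x: xs, yData: ys}))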
I also used
    optimizer = tf.train.MomentumOptimizer(0.001, 0.9)
The implementation runs without errors. However, it does not converge, whereas loss = tf.square(yData - yModel) converges, and loss = tf.maximum(tf.square(yData - yModel) - epsilonTensor, tf.zeros_like(yData)) also converges.
So I also tried something simpler, loss = tf.abs(yData - yModel), and it does not converge either. Am I doing something wrong, is this a problem with the non-differentiability of abs at zero, or is it something else? What is going on with the abs function?
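As a side check (my own probe, not part of the original attempt): TensorFlow does register a gradient for tf.abs, namely sign(x), which evaluates to 0 at x == 0, so the gradient is defined everywhere and can be inspected directly:

    import tensorflow as tf

    x = tf.Variable([-1.0, 0.0, 1.0])
    y = tf.abs(x)
    grad = tf.gradients(y, x)[0]   # registered gradient of abs is sign(x)

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        print(sess.run(grad))      # [-1.  0.  1.]; defined even at x == 0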
tensorflow
DanielTheRocketMan