How to set an RMSE cost function in TensorFlow

I have a cost function in TensorFlow.

 activation = tf.add(tf.mul(X, W), b)
 cost = tf.pow(Y - y_model, 2)  # use squared error for cost function

I am trying this example. How can I change it to an RMSE cost function?

4 answers
 tf.sqrt(tf.reduce_mean(tf.square(tf.subtract(targets, outputs)))) 

Or, slightly simplified (TensorFlow overloads the relevant arithmetic operators):

 tf.sqrt(tf.reduce_mean((targets - outputs)**2)) 
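As a quick sanity check outside TensorFlow, the same formula can be evaluated with NumPy (a sketch; the arrays here are made-up example data):

```python
import numpy as np

targets = np.array([1.0, 2.0, 3.0])
outputs = np.array([1.5, 2.0, 2.0])

# Verbose form: sqrt(mean(square(subtract(...))))
rmse_verbose = np.sqrt(np.mean(np.square(targets - outputs)))

# Operator form, mirroring the simplified TensorFlow expression
rmse_short = np.sqrt(np.mean((targets - outputs) ** 2))

print(rmse_verbose)  # ≈ 0.645
```

Both forms compute the same value; the operator form is simply easier to read.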

The formula for the root mean squared error (RMSE) is:

 RMSE = sqrt( (1/n) * Σ (Y1_i - Y2_i)² )

The way to implement this in TF is tf.sqrt(tf.reduce_mean(tf.squared_difference(Y1, Y2))).


It is important to remember that there is no need to minimize RMSE itself with the optimizer. You get the same result by minimizing just tf.reduce_mean(tf.squared_difference(Y1, Y2)), or even tf.reduce_sum(tf.squared_difference(Y1, Y2)); since these build a smaller operation graph, they will optimize faster.

Use the full expression with tf.sqrt only when you actually want to report the RMSE value.
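The claim that all three losses share a minimizer can be checked numerically. A NumPy sketch (the data and the candidate grid are made-up for illustration) fits a single constant to some targets and shows that MSE, SSE, and RMSE all bottom out at the same point, the mean:

```python
import numpy as np

# Hypothetical 1-D example: predict one constant c for all targets.
y = np.array([1.0, 2.0, 3.0, 6.0])
candidates = np.linspace(0.0, 8.0, 801)  # grid of candidate constants

mse = np.array([np.mean((y - c) ** 2) for c in candidates])
sse = np.array([np.sum((y - c) ** 2) for c in candidates])
rmse = np.sqrt(mse)

# SSE is MSE scaled by n, and RMSE is a monotonic function of MSE,
# so all three are minimized by the same candidate.
best = candidates[np.argmin(mse)]
print(best)  # ≈ 3.0, the mean of y
```

This is why minimizing the cheaper MSE or SSE during training, and computing RMSE only for reporting, is safe.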


(1) Are you sure you need it? Minimizing the L2 loss gives you the same result as minimizing the RMSE. (Work through the math: you don't need to take the square root, since minimizing x² still minimizes x for x > 0, and a sum of squares is non-negative. Likewise, minimizing x*n minimizes x for any positive constant n.)

(2) If you need to know the numerical value of the RMSE error, then implement it directly from the RMSE definition:

 tf.sqrt(tf.reduce_sum(...)/n) 

(You need to know or compute n, the number of elements in the sum, and set the reduction axis correctly in the reduce_sum call.)
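The choice of n and of the reduction axis matters once the tensor is batched. A NumPy sketch of the same idea (shapes and data here are illustrative):

```python
import numpy as np

# Batch of 2 examples, 4 outputs each (illustrative shapes).
y_true = np.array([[1.0, 2.0, 3.0, 4.0],
                   [0.0, 0.0, 0.0, 0.0]])
y_pred = np.array([[1.0, 2.0, 3.0, 4.0],
                   [1.0, 1.0, 1.0, 1.0]])

sq_err = (y_true - y_pred) ** 2
n = sq_err.shape[1]  # number of elements summed per example

# Per-example RMSE: reduce over the output axis only.
rmse_per_example = np.sqrt(np.sum(sq_err, axis=1) / n)
print(rmse_per_example)  # [0. 1.]
```

Reducing over the wrong axis, or dividing by the wrong n, silently gives a differently scaled number, so it is worth checking against a hand-computed case like this.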


Now we have tf.losses.mean_squared_error

therefore

 RMSE = tf.sqrt(tf.losses.mean_squared_error(label, prediction))  # the signature is (labels, predictions)
