I have a loss function implemented in TensorFlow that computes the standard error. All of the tensors used to compute the target are of type float64, so the loss function itself has dtype float64. In particular:
    print cost  ==>  Tensor("add_5:0", shape=TensorShape([]), dtype=float64)
However, when I try to minimize it, I get a ValueError complaining about the tensor's type:
    GradientDescentOptimizer(learning_rate=0.1).minimize(cost)
    ==> ValueError: Invalid type <dtype: 'float64'> for add_5:0, expected: [tf.float32].
I don't understand why the expected dtype of this tensor is single-precision float when every variable leading up to the calculation is of type float64. I have confirmed that the calculation works correctly when I force all of the variables to float32.
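For example, a float32-only version along these lines (a minimal sketch; the names are illustrative rather than my exact code) builds and minimizes without error:

    import tensorflow as tf
    import numpy as np

    # Illustrative sketch: everything is kept in float32 end to end.
    x_data = np.random.rand(100).astype(np.float32)
    y_data = x_data * 0.1 + 0.3

    W = tf.Variable(tf.random_uniform([1], -1.0, 1.0))  # float32 by default
    b = tf.Variable(tf.zeros([1]))
    y = W * x_data + b

    cost = tf.reduce_mean(tf.square(y - y_data))

    # minimize() accepts the float32 cost without complaint.
    train = tf.train.GradientDescentOptimizer(learning_rate=0.1).minimize(cost)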
Does anyone have an idea why this could happen? My computer is a 64-bit machine.
Here is an example that reproduces the behavior:
    import tensorflow as tf
    import numpy as np
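    # The rest of my script looks roughly like this (a minimal sketch with
    # illustrative names; every tensor in the graph is float64).
    x_data = np.random.rand(100).astype(np.float64)
    y_data = x_data * 0.1 + 0.3

    W = tf.Variable(tf.random_uniform([1], -1.0, 1.0, dtype=tf.float64))
    b = tf.Variable(tf.zeros([1], dtype=tf.float64))
    y = W * x_data + b

    cost = tf.reduce_mean(tf.square(y - y_data))

    # Raises something like:
    #   ValueError: Invalid type <dtype: 'float64'> for add_5:0, expected: [tf.float32].
    train = tf.train.GradientDescentOptimizer(learning_rate=0.1).minimize(cost)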