Error when minimizing a float64 loss

I have a loss function implemented in TensorFlow that computes the squared error. All tensors used to calculate the target are of type float64, and therefore the loss tensor itself has dtype float64. In particular,

print cost ==> Tensor("add_5:0", shape=TensorShape([]), dtype=float64) 

However, when I try to minimize it, I get a ValueError complaining about the tensor's type:

 GradientDescentOptimizer(learning_rate=0.1).minimize(cost) ==> ValueError: Invalid type <dtype: 'float64'> for add_5:0, expected: [tf.float32]. 

I don't understand why the expected dtype of the tensor is a single-precision float, when all the variables leading up to the calculation are of type float64. I have confirmed that when I force all variables to float32, the computation runs correctly (a sketch of that float32 variant follows the reproduction below).

Does anyone have an idea why this could happen? My computer is a 64-bit machine.

Here is an example that reproduces the behavior:

    import tensorflow as tf
    import numpy as np

    # Make 100 phony data points in NumPy.
    x_data = np.random.rand(2, 100)  # Random input
    y_data = np.dot([0.100, 0.200], x_data) + 0.300

    # Construct a linear model.
    b = tf.Variable(tf.zeros([1], dtype=np.float64))
    W = tf.Variable(tf.random_uniform([1, 2], minval=-1.0, maxval=1.0, dtype=np.float64))
    y = tf.matmul(W, x_data) + b

    # Minimize the squared errors.
    loss = tf.reduce_mean(tf.square(y - y_data))
    optimizer = tf.train.GradientDescentOptimizer(0.5)
    train = optimizer.minimize(loss)

    # For initializing the variables.
    init = tf.initialize_all_variables()

    # Launch the graph.
    sess = tf.Session()
    sess.run(init)

    # Fit the plane.
    for step in xrange(0, 201):
        sess.run(train)
        if step % 20 == 0:
            print step, sess.run(W), sess.run(b)
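For comparison, here is a rough sketch of the float32 variant mentioned above (the *32-suffixed names are only for illustration; the real changes are the variable dtypes and casting the NumPy inputs so they match):

    # float32 variant -- this version trains without raising the ValueError.
    x_data32 = x_data.astype(np.float32)
    y_data32 = y_data.astype(np.float32)

    b32 = tf.Variable(tf.zeros([1], dtype=np.float32))
    W32 = tf.Variable(tf.random_uniform([1, 2], minval=-1.0, maxval=1.0, dtype=np.float32))
    y32 = tf.matmul(W32, x_data32) + b32

    loss32 = tf.reduce_mean(tf.square(y32 - y_data32))
    train32 = tf.train.GradientDescentOptimizer(0.5).minimize(loss32)  # no ValueError here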
+7
tensorflow
1 answer

The tf.train.GradientDescentOptimizer class currently only supports training on 32-bit floating-point variables and loss values.

However, the underlying kernels appear to be implemented for double-precision values as well, so it should be possible to train with float64 in your scenario.

A quick workaround would be to define a subclass that also supports tf.float64 values:

    class DoubleGDOptimizer(tf.train.GradientDescentOptimizer):
        def _valid_dtypes(self):
            return set([tf.float32, tf.float64])

...and then use DoubleGDOptimizer in place of tf.train.GradientDescentOptimizer.

EDIT: You'll need to pass the learning rate as tf.constant(learning_rate, tf.float64) to make this work.
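For example, a minimal sketch of how the subclass might be wired into the graph from the question (this assumes the DoubleGDOptimizer class above and the float64 loss tensor defined earlier):

    # Hypothetical usage of the DoubleGDOptimizer subclass on the float64 graph above.
    learning_rate = tf.constant(0.5, dtype=tf.float64)  # learning rate must also be float64
    optimizer = DoubleGDOptimizer(learning_rate)
    train = optimizer.minimize(loss)  # loss is the float64 tensor from the question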

(N.B. This relies on an unsupported internal interface, and it may change in the future, but the team is aware of the demand for optimizing double-precision variables and intends to provide a built-in solution.)

+4
