How does tf.gradients work in TensorFlow?

Given that I have a linear model as follows, I would like to get a gradient vector with respect to W and b.

import tensorflow as tf
import numpy as np
rng = np.random

# tf Graph Input
X = tf.placeholder("float")
Y = tf.placeholder("float")

# Set model weights
W = tf.Variable(rng.randn(), name="weight")
b = tf.Variable(rng.randn(), name="bias")

# Construct a linear model
pred = tf.add(tf.mul(X, W), b)

# Mean squared error (n_samples is the number of training examples, defined elsewhere)
cost = tf.reduce_sum(tf.pow(pred - Y, 2)) / (2 * n_samples)

However, the cost is a function cost(X, Y, W, b), and I want gradients only with respect to W and b. If I try something like this:

 grads = tf.gradients(cost, tf.all_variables()) 

my placeholders (X and Y) will also be included. Even if I do get gradients with respect to [X, Y, W, b], how do I know which element of the result belongs to which parameter? The return value is just a list without names, so I cannot tell which parameter each derivative was taken with respect to.

In this question I use parts of this code, and I build on this question.

1 answer

Quoting the documentation for tf.gradients:

Constructs symbolic derivatives of sum of ys w.r.t. x in xs.

So this should work:

 dc_dw, dc_db = tf.gradients(cost, [W, b]) 

Here tf.gradients() returns the gradient of cost with respect to each tensor in the second argument, as a list in the same order. Since the positions in the returned list line up with the positions in the list you pass in, there is no ambiguity about which gradient belongs to which parameter.
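For instance, here is a minimal end-to-end sketch that evaluates both gradients. It uses the same old-style TensorFlow API as the question (tf.mul, tf.initialize_all_variables), and the toy data values are made up purely for illustration:

import numpy as np
import tensorflow as tf

# Toy training data (made-up values, just for illustration)
train_X = np.asarray([1.0, 2.0, 3.0, 4.0])
train_Y = np.asarray([2.0, 4.0, 6.0, 8.0])
n_samples = train_X.shape[0]

X = tf.placeholder("float")
Y = tf.placeholder("float")
W = tf.Variable(np.random.randn(), name="weight")
b = tf.Variable(np.random.randn(), name="bias")

pred = tf.add(tf.mul(X, W), b)
cost = tf.reduce_sum(tf.pow(pred - Y, 2)) / (2 * n_samples)

# The returned list matches the order of [W, b]:
# dc_dw is d(cost)/dW, dc_db is d(cost)/db
dc_dw, dc_db = tf.gradients(cost, [W, b])

with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    gw, gb = sess.run([dc_dw, dc_db], feed_dict={X: train_X, Y: train_Y})
    print("dcost/dW =", gw)
    print("dcost/db =", gb)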

Read tf.gradients for more details.

