Given that I have a linear model as follows, I would like to get a gradient vector with respect to W and b.
import tensorflow as tf
import numpy as np

rng = np.random
n_samples = 100  # hypothetical value; in my code this is the number of training examples

# tf Graph Input
X = tf.placeholder("float")
Y = tf.placeholder("float")

# Set model weights
W = tf.Variable(rng.randn(), name="weight")
b = tf.Variable(rng.randn(), name="bias")

# Construct a linear model
pred = tf.add(tf.mul(X, W), b)

# Mean squared error
cost = tf.reduce_sum(tf.pow(pred - Y, 2)) / (2 * n_samples)
However, if I try something like the following, where cost is effectively a function cost(X, Y, W, b) and I want gradients only with respect to W and b:
grads = tf.gradients(cost, tf.all_variables())
my placeholders X and Y will also be included. Even if I do get gradients for [X, Y, W, b], how do I know which element of the returned list belongs to which parameter? It is just a list without names, so with respect to which parameter was each derivative taken?
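What I am after is something like the minimal sketch below (x_train and y_train are hypothetical NumPy arrays standing in for my training data). As far as I understand, tf.gradients returns one gradient per entry of the list passed as its second argument, in the same order, so passing [W, b] explicitly should let me pair each gradient with its variable myself:

params = [W, b]
grads = tf.gradients(cost, params)  # one gradient per entry of params, in the same order

# pair each gradient with the name of the variable it was taken with respect to
grad_by_var = dict(zip([p.name for p in params], grads))

with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    # x_train, y_train: hypothetical training data fed into the placeholders
    g_W, g_b = sess.run(grads, feed_dict={X: x_train, Y: y_train})

Is relying on this positional correspondence the right approach, or is there a way to get the gradients with the variable names attached?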
In this question, I use parts of this code, and I build on this question.
machine-learning tensorflow linear-gradients
user3139545