TensorBoard: How to build a histogram for gradients?

TensorBoard has a function for building histograms of tensors during a session. I want a histogram of the gradients during training.

tf.gradients(yvars, xvars) returns a list of gradients.

However, tf.histogram_summary('name', tensor) accepts only a single tensor, not a list of tensors.

My current workaround is to reshape each gradient tensor into a column vector and concatenate them all into one tensor:

g = tf.reshape(grads[0], [-1, 1])
for l in xrange(1, listlength):
    col_vec = tf.reshape(grads[l], [-1, 1])
    g = tf.concat(0, [g, col_vec])
grad_hist = tf.histogram_summary("name", g)
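The flattening step itself is just concatenation of reshaped arrays. A minimal NumPy sketch of the same idea (the gradient shapes here are made up for illustration):

```python
import numpy as np

# hypothetical per-variable gradients: a 2x3 weight matrix and a bias vector
grads = [np.random.randn(2, 3), np.random.randn(4)]

# reshape each gradient to a column and stack them into one long column,
# mirroring the tf.reshape(g, [-1, 1]) + tf.concat pattern above
flat = np.concatenate([g.reshape(-1, 1) for g in grads], axis=0)
print(flat.shape)  # (10, 1)
```

One histogram over the concatenated vector mixes all variables together; the answers below show how to get a separate histogram per variable instead.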

What is the best way to plot a histogram of the gradients?

This seems like a common need, so I hope TensorFlow has (or will get) a dedicated feature for it.

+4
2 answers

@user728291, you can use the tf.contrib.layers.optimize_loss function, which can record gradient summaries for you. The signature of optimize_loss is:

optimize_loss(
    loss,
    global_step,
    learning_rate,
    optimizer,
    gradient_noise_scale=None,
    gradient_multipliers=None,
    clip_gradients=None,
    learning_rate_decay_fn=None,
    update_ops=None,
    variables=None,
    name=None,
    summaries=None,
    colocate_gradients_with_ops=False,
    increment_global_step=True
)

The global_step argument is required; you can define it like this:

from tensorflow.python.ops import variable_scope
from tensorflow.python.framework import dtypes
from tensorflow.python.ops import init_ops
global_step = variable_scope.get_variable(  # this needs to be defined for tf.contrib.layers.optimize_loss()
      "global_step", [],
      trainable=False,
      dtype=dtypes.int64,
      initializer=init_ops.constant_initializer(0, dtype=dtypes.int64))

Then, instead of the usual

training_operation = optimizer.minimize(loss_operation)

use:

training_operation = tf.contrib.layers.optimize_loss(
      loss_operation, global_step, learning_rate=rate, optimizer='Adam',
      summaries=["gradients"])

summary = tf.summary.merge_all()

Then write the summaries in your TensorFlow session as usual:

summary_writer = tf.summary.FileWriter(logdir_run_x, sess.graph) 
summary_str = sess.run(summary, feed_dict=feed_dict)
summary_writer.add_summary(summary_str, i)
summary_writer.flush()  # evidently this is needed sometimes or scalars will not show up on tensorboard.

Here logdir_run_x is a different directory for each run, so TensorBoard can show the runs side by side. In TensorBoard, the gradient histograms appear under a scope named OptimizeLoss, with one histogram per weight and per bias (and, with the Adam optimizer, its beta variables).

UPDATE: tf.slim has a simpler way to do the same thing, which also seems to work better:

optimizer = tf.train.AdamOptimizer(learning_rate=rate)
training_operation = slim.learning.create_train_op(loss_operation, optimizer, summarize_gradients=True)

Here summarize_gradients=True, which is not the default, enables summaries of the gradients of all the weights. These are viewable in TensorBoard under summarize_grads.

+3

Another approach (which I came across on Quora) is to compute the gradients explicitly and add a histogram summary for each variable:

optimizer = tf.train.AdamOptimizer(..)
grads = optimizer.compute_gradients(loss)
# variable names contain ':' (e.g. 'w:0'), which is not a valid summary tag
# character, so replace it to avoid warnings about illegal summary names
grad_summ_op = tf.summary.merge(
    [tf.summary.histogram("%s-grad" % v.name.replace(':', '_'), g)
     for g, v in grads])
grad_vals = sess.run(fetches=grad_summ_op, feed_dict=feed_dict)
writer['train'].add_summary(grad_vals)
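Note that TF1 summary tags may only contain letters, digits, and the characters `_`, `-`, `.`, `/`, while variable names like "dense/kernel:0" contain a ':'. A small sketch of the kind of sanitizing this requires (the regex here is an assumption modeled on TensorFlow's own tag cleaning, not the library's actual function):

```python
import re

def sanitize_tag(name):
    # replace any character outside the allowed summary-tag set with '_'
    return re.sub(r"[^A-Za-z0-9_.\-/]", "_", name)

print(sanitize_tag("dense/kernel:0"))  # dense/kernel_0
```

Without this, TensorFlow will still rename the tag for you, but it logs a warning for every summary it has to clean.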
0
