Renormalize the weight matrix using TensorFlow

I would like to add a max-norm constraint to several weight matrices in my TensorFlow graph, à la Torch's renorm method.

If the L2 norm of any neuron's weight vector exceeds max_norm, I would like to scale its weights down so that their L2 norm is exactly max_norm.
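In NumPy terms, the behavior I'm after is roughly this (just a sketch to pin down the semantics, assuming one neuron per row; the function name is mine):

    import numpy as np

    def desired_renorm(W, max_norm):
        # Per-row L2 norms, kept 2-D so the division below broadcasts.
        norms = np.linalg.norm(W, axis=1, keepdims=True)
        # Scale down only the rows whose norm exceeds max_norm.
        scale = np.minimum(1.0, max_norm / np.maximum(norms, 1e-12))
        return W * scale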

What is the best way to express this with TensorFlow?

3 answers

Here is a possible implementation:

    import tensorflow as tf

    def max_norm_regularizer(threshold, axes=1, name="max_norm",
                             collection="max_norm"):
        def max_norm(weights):
            clipped = tf.clip_by_norm(weights, clip_norm=threshold, axes=axes)
            clip_weights = tf.assign(weights, clipped, name=name)
            tf.add_to_collection(collection, clip_weights)
            return None  # there is no regularization loss term
        return max_norm

Here's how you use it:

    from tensorflow.contrib.layers import fully_connected
    from tensorflow.contrib.framework import arg_scope

    with arg_scope(
            [fully_connected],
            weights_regularizer=max_norm_regularizer(1.5)):
        hidden1 = fully_connected(X, 200, scope="hidden1")
        hidden2 = fully_connected(hidden1, 100, scope="hidden2")
        outputs = fully_connected(hidden2, 5, activation_fn=None, scope="outs")

    max_norm_ops = tf.get_collection("max_norm")

    [...]

    with tf.Session() as sess:
        sess.run(init)
        for epoch in range(n_epochs):
            for X_batch, y_batch in load_next_batch():
                sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
                sess.run(max_norm_ops)

This creates a 3-layer neural network and trains it with max-norm regularization at every layer (with a threshold of 1.5). I just tried it and it seems to work. Hope this helps! Suggestions for improvement are welcome. :)
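One convenience worth noting (my addition, not part of the recipe above): the collected clipping ops can be bundled into a single op with tf.group(), so the training loop only has to run one extra op:

    clip_all_weights = tf.group(*tf.get_collection("max_norm"))

    # ... then, after each training step:
    sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
    sess.run(clip_all_weights)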

Notes

This code is based on tf.clip_by_norm():

    >>> x = tf.constant([0., 0., 3., 4., 30., 40., 300., 400.], shape=(4, 2))
    >>> print(x.eval())
    [[   0.    0.]
     [   3.    4.]
     [  30.   40.]
     [ 300.  400.]]
    >>> clip_rows = tf.clip_by_norm(x, clip_norm=10, axes=1)
    >>> print(clip_rows.eval())
    [[ 0.          0.        ]
     [ 3.          4.        ]
     [ 6.          8.        ]   # clipped!
     [ 6.00000048  8.        ]]  # clipped!

You can also clip columns if you need to:

    >>> clip_cols = tf.clip_by_norm(x, clip_norm=350, axes=0)
    >>> print(clip_cols.eval())
    [[   0.            0.        ]
     [   3.            3.48245788]
     [  30.           34.82457733]
     [ 300.          348.24578857]]  # the second column was clipped!
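For intuition, here is a rough NumPy equivalent of what tf.clip_by_norm() computes (my sketch, not TensorFlow's actual implementation; the epsilon guard against zero norms is my addition):

    import numpy as np

    def clip_by_norm_np(x, clip_norm, axes):
        # L2 norm along the given axes, with dimensions kept for broadcasting.
        l2norm = np.sqrt(np.sum(x * x, axis=axes, keepdims=True))
        # Slices already within the norm budget keep a scale factor of 1.
        scale = np.minimum(1.0, clip_norm / np.maximum(l2norm, 1e-12))
        return x * scale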

Using Rafał's suggestion and TensorFlow's implementation of clip_by_norm, here is what I came up with:

    def renorm(x, axis, max_norm):
        '''Renormalizes the sub-tensors along axis such that they do not
        exceed norm max_norm.'''
        # This elaborate dance avoids empty slices, which TF dislikes.
        rank = tf.rank(x)
        bigrange = tf.range(-1, rank + 1)
        dims = tf.slice(
            tf.concat(0, [tf.slice(bigrange, [0], [1 + axis]),
                          tf.slice(bigrange, [axis + 2], [-1])]),
            [1], rank - [1])
        # Determine which columns need to be renormalized.
        l2norm_inv = tf.rsqrt(tf.reduce_sum(x * x, dims, keep_dims=True))
        scale = max_norm * tf.minimum(l2norm_inv, tf.constant(1.0 / max_norm))
        # Broadcast the scalings
        return tf.mul(scale, x)

It seems to have the desired behavior for 2-dimensional matrices, and it should generalize to higher-rank tensors:

    > x = tf.constant([0., 0., 3., 4., 30., 40., 300., 400.], shape=(4, 2))
    > print x.eval()
    [[   0.    0.]   # rows have norms of 0, 5, 50, 500
     [   3.    4.]   # cols have norms of ~302, ~402
     [  30.   40.]
     [ 300.  400.]]
    > print renorm(x, 0, 10).eval()
    [[ 0.          0.        ]   # unaffected
     [ 3.          4.        ]   # unaffected
     [ 5.99999952  7.99999952]   # rescaled
     [ 6.00000048  8.00000095]]  # rescaled
    > print renorm(x, 1, 350).eval()
    [[   0.            0.        ]  # col 0 is unaffected
     [   3.            3.48245788]  # col 1 is rescaled
     [  30.           34.82457733]
     [ 300.          348.24578857]]
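To actually enforce the constraint during training, you would assign the renormalized value back to the variable, something along these lines (a sketch; weights and the 1.5 threshold are placeholders for your own variable and limit):

    # axis=0 renormalizes each row, as in the first example above.
    max_norm_op = tf.assign(weights, renorm(weights, axis=0, max_norm=1.5))

    # ... then run it after each training step:
    sess.run(max_norm_op)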

Take a look at clip_by_norm, which does exactly that. It takes a single tensor as input and returns a scaled-down tensor.
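For example, a minimal sketch of applying it to a weight variable (the shape and threshold are made up, and axes=1 assumes one neuron per row):

    import tensorflow as tf

    max_norm = 1.5
    weights = tf.Variable(tf.truncated_normal([784, 200], stddev=0.1))
    # Replace the weights with a version whose row norms are capped at max_norm.
    clip_op = tf.assign(weights,
                        tf.clip_by_norm(weights, clip_norm=max_norm, axes=1))
    # Run clip_op after each training step to enforce the constraint.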

