TL;DR: Collect all trainable parameters in a list and add their Lⁿ norm (e.g. L2) to the objective function before computing the gradients for optimization.
1) In the function where you define the model output:
```python
net = [v for v in tf.trainable_variables()]
return *, net   # * stands for whatever your function already returns
```
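For concreteness, a model-building function could be shaped like the sketch below; `build_model`, `inputs`, and the single dense layer are invented for illustration, and only the last two lines are what the step actually requires:

```python
import tensorflow as tf  # TF 1.x

def build_model(inputs):
    # ... your layers; one dense layer here purely as a placeholder
    output = tf.layers.dense(inputs, units=1)
    # collect every trainable parameter so the regulariser can see them
    net = [v for v in tf.trainable_variables()]
    return output, net
```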
2) Add the Lⁿ norm to the cost and compute the gradients of that cost:
```python
weight_reg = tf.add_n([0.001 * tf.nn.l2_loss(var) for var in net])  # L2 penalty
cost = your_original_objective + weight_reg  # original objective without the regulariser, plus the penalty
param_gradients = tf.gradients(cost, net)
optimiser = tf.train.AdamOptimizer(0.001).apply_gradients(zip(param_gradients, net))
```
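If you do not need the gradients themselves, the explicit `tf.gradients` / `apply_gradients` pair above can be collapsed into a single `minimize` call; this is an equivalent shortcut, not part of the original recipe:

```python
optimiser = tf.train.AdamOptimizer(0.001).minimize(cost, var_list=net)
```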
3) Run the optimizer whenever you want, for example:
```python
_ = sess.run(optimiser, feed_dict={input_var: data})
```
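Putting the three steps together, a minimal end-to-end sketch might look like the following (TF 1.x assumed; the placeholder shapes, the toy linear model, the MSE data term, and the random data are invented for illustration):

```python
import numpy as np
import tensorflow as tf  # TF 1.x

# 1) define the model and collect every trainable parameter
input_var = tf.placeholder(tf.float32, shape=[None, 10])
target_var = tf.placeholder(tf.float32, shape=[None, 1])
output = tf.layers.dense(input_var, units=1)
net = [v for v in tf.trainable_variables()]

# 2) objective = data term + L2 penalty on every parameter
weight_reg = tf.add_n([0.001 * tf.nn.l2_loss(var) for var in net])
cost = tf.losses.mean_squared_error(target_var, output) + weight_reg
param_gradients = tf.gradients(cost, net)
optimiser = tf.train.AdamOptimizer(0.001).apply_gradients(zip(param_gradients, net))

# 3) run the optimizer in a training loop
data = np.random.randn(32, 10).astype(np.float32)
targets = np.random.randn(32, 1).astype(np.float32)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(100):
        _, c = sess.run([optimiser, cost],
                        feed_dict={input_var: data, target_var: targets})
```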