TensorFlow: minimize the loss with respect to only some elements of a variable

Is it possible to minimize the loss function by changing only some elements of a variable? In other words, if I have a variable X of length 2, how can I minimize the loss function by changing X[0] while keeping X[1] constant?

Hopefully this code I tried describes my problem:

import tensorflow as tf
import tensorflow.contrib.opt as opt

X = tf.Variable([1.0, 2.0])
X0 = tf.Variable([3.0])

Y = tf.constant([2.0, -3.0])

scatter = tf.scatter_update(X, [0], X0)

with tf.control_dependencies([scatter]):
    loss = tf.reduce_sum(tf.squared_difference(X, Y))

opt = opt.ScipyOptimizerInterface(loss, [X0])

init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)
    opt.minimize(sess)

    print("X: {}".format(X.eval()))
    print("X0: {}".format(X0.eval()))

which outputs:

INFO:tensorflow:Optimization terminated with:
  Message: b'CONVERGENCE: NORM_OF_PROJECTED_GRADIENT_<=_PGTOL'
  Objective function value: 26.000000
  Number of iterations: 0
  Number of functions evaluations: 1
X: [3. 2.]
X0: [3.]

where I would like to find the optimal value X0 = 2 and thus X = [2, 2].


The motivation for this: I would like to import a pretrained graph/model and then tweak various elements of some of its variables depending on new data that I have.


You can restrict the gradient computation to a single index with the following trick:

import tensorflow as tf
import tensorflow.contrib.opt as opt

X = tf.Variable([1.0, 2.0])

part_X = tf.scatter_nd([[0]], [X[0]], [2])

X_2 = part_X + tf.stop_gradient(-part_X + X)

Y = tf.constant([2.0, -3.0])

loss = tf.reduce_sum(tf.squared_difference(X_2, Y))

opt = opt.ScipyOptimizerInterface(loss, [X])

init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)
    opt.minimize(sess)

    print("X: {}".format(X.eval()))

part_X becomes the value that needs to be updated, in the same shape as X. part_X + tf.stop_gradient(-part_X + X) is the same as X in the forward pass, since part_X - part_X is 0. However, in the backward pass tf.stop_gradient blocks all the unneeded gradient computations, so only X[0] receives a gradient.
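
To see the masking at work, here is a minimal sketch (rebuilding the same graph as above) that inspects the gradient of the loss with respect to X; only index 0 should receive a nonzero gradient:

import tensorflow as tf

X = tf.Variable([1.0, 2.0])
part_X = tf.scatter_nd([[0]], [X[0]], [2])
X_2 = part_X + tf.stop_gradient(-part_X + X)
Y = tf.constant([2.0, -3.0])
loss = tf.reduce_sum(tf.squared_difference(X_2, Y))

# Gradient flows to X only through part_X, i.e. only through X[0]
(grad_X,) = tf.gradients(loss, [X])

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(grad_X))  # expected: [-2.  0.]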


I am not sure whether it is possible with the SciPy optimizer interface, but with one of the regular tf.train.Optimizer subclasses you can do it by first calling compute_gradients, then masking the gradients, and then calling apply_gradients, instead of calling minimize (which, as the docs say, basically calls the former two).

import tensorflow as tf

X = tf.Variable([3.0, 2.0])
# Select updatable parameters
X_mask = tf.constant([True, False], dtype=tf.bool)
Y = tf.constant([2.0, -3.0])
loss = tf.reduce_sum(tf.squared_difference(X, Y))
opt = tf.train.GradientDescentOptimizer(learning_rate=0.1)
# Get gradients and mask them
((X_grad, _),) = opt.compute_gradients(loss, var_list=[X])
X_grad_masked = X_grad * tf.cast(X_mask, dtype=X_grad.dtype)
# Apply masked gradients
train_step = opt.apply_gradients([(X_grad_masked, X)])

init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    for i in range(10):
        _, X_val = sess.run([train_step, X])
        print("Step {}: X = {}".format(i, X_val))
    print("Final X = {}".format(X.eval()))

This prints:

Step 0: X = [ 2.79999995  2.        ]
Step 1: X = [ 2.63999987  2.        ]
Step 2: X = [ 2.51199985  2.        ]
Step 3: X = [ 2.40959978  2.        ]
Step 4: X = [ 2.32767987  2.        ]
Step 5: X = [ 2.26214385  2.        ]
Step 6: X = [ 2.20971513  2.        ]
Step 7: X = [ 2.16777205  2.        ]
Step 8: X = [ 2.13421774  2.        ]
Step 9: X = [ 2.10737419  2.        ]
Final X = [ 2.10737419  2.        ]

You can use the var_list parameter of minimize:

trainable_var = X[0]
train_op = tf.train.GradientDescentOptimizer(learning_rate=1e-3).minimize(loss, var_list=[trainable_var])
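
Note that var_list generally expects tf.Variable objects, and a slice like X[0] is a tensor, so the snippet above may not work as-is. A minimal sketch that splits X into two variables instead (the names X0 and X1 are illustrative):

import tensorflow as tf

X0 = tf.Variable([1.0])  # the element we want to train
X1 = tf.Variable([2.0])  # the element we want to freeze
X = tf.concat([X0, X1], axis=0)

Y = tf.constant([2.0, -3.0])
loss = tf.reduce_sum(tf.squared_difference(X, Y))

# Only X0 is in var_list, so X1 is never updated
train_op = tf.train.GradientDescentOptimizer(learning_rate=0.1).minimize(loss, var_list=[X0])

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(100):
        sess.run(train_op)
    print(sess.run(X))  # approaches [2. 2.]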

By the way, TensorFlow collects all trainable variables in the GraphKeys.TRAINABLE_VARIABLES collection by default, so you can get a list of them via:

all_trainable_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES)

This is a list of variables, and you can select the ones you need from it and pass them as var_list.

Furthermore, if you want more control over the process, you can compute the gradients yourself via grads = tf.gradients(loss, var_list), process them however you like, and then apply them via tf.train.GradientDescentOptimizer(...).apply_gradients(grads_and_vars_as_list_of_tuples). A sketch of this manual route, with the gradient of the frozen element masked out by hand, follows.
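
A minimal self-contained sketch of the manual route (the mask and learning rate are illustrative):

import tensorflow as tf

X = tf.Variable([3.0, 2.0])
Y = tf.constant([2.0, -3.0])
loss = tf.reduce_sum(tf.squared_difference(X, Y))

var_list = [X]
grads = tf.gradients(loss, var_list)
# Zero out the gradient of the element that should stay fixed
mask = tf.constant([1.0, 0.0])
grads = [g * mask for g in grads]
grads_and_vars_as_list_of_tuples = list(zip(grads, var_list))
train_step = tf.train.GradientDescentOptimizer(learning_rate=0.1).apply_gradients(
    grads_and_vars_as_list_of_tuples)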

That's it: you have full control over the training process, whether you use plain SGD or a more elaborate optimizer.


Oren's answer in the second link below calls a function (defined in the first link) that takes a boolean mask of the parameters to optimize together with the parameter tensor. It uses stop_gradient and worked like a charm for the neural network that I developed.

Updating only part of the word embedding matrix in Tensorflow

https://github.com/tensorflow/tensorflow/issues/9162
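
For context, a minimal sketch of such a mask-based helper, under the assumption that it follows the pattern described in the linked issue (the name entry_stop_gradients is illustrative, not a verbatim copy of the linked code):

import tensorflow as tf

def entry_stop_gradients(target, mask):
    """Let gradients flow only where mask == 1; block them elsewhere."""
    mask_inv = 1.0 - mask
    # Forward pass: mask_inv * target + mask * target == target.
    # Backward pass: only the mask * target term is differentiated.
    return tf.stop_gradient(mask_inv * target) + mask * target

X = tf.Variable([1.0, 2.0])
mask = tf.constant([1.0, 0.0])  # train X[0], freeze X[1]
Y = tf.constant([2.0, -3.0])
loss = tf.reduce_sum(tf.squared_difference(entry_stop_gradients(X, mask), Y))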
