Run train op several times in TensorFlow

I have quite large batch sizes on which I would like to take several gradient steps. I could easily do this with a Python for loop, but I suspect there is a more efficient method that does not involve transferring the data to the GPU on every iteration. I have tried putting the train op in the fetch list several times, but I am not sure it actually runs more than once (the execution time is exactly the same).
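
For reference, a self-contained sketch of both approaches (the toy model and the names x, w, train_op are made-up stand-ins for my real setup):

import tensorflow as tf

x = tf.placeholder(tf.float32)
w = tf.Variable(0.0)
loss = tf.reduce_mean(tf.square(x - w))
train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

sess = tf.Session()
sess.run(tf.initialize_all_variables())
batch = [1.0, 2.0, 3.0]

# plain Python loop: re-feeds the batch to the runtime on every step
for i in range(3):
    sess.run(train_op, feed_dict={x: batch})

# what I tried instead: repeating the op in the fetch list; the timing
# is identical, so I suspect it only runs once
sess.run([train_op, train_op, train_op], feed_dict={x: batch})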

2 answers

If you have variable-sized batches, a Variable is not a good fit for storing the data; instead, you can persist it between run calls using persistent tensors. Here is a toy example:

 
import tensorflow as tf

dt = tf.int32
params = tf.Variable(tf.ones_initializer((), dtype=dt))
data_batches = [[1], [2, 3], [4, 5, 6]]

# op that uploads data to the TF runtime and saves it as a persistent tensor
data_saver_placeholder = tf.placeholder(dt)
tensor_handle_op = tf.get_session_handle(data_saver_placeholder)

# placeholder for the handle, plus the persistent tensor it dereferences to
data_placeholder, data = tf.get_session_tensor(dt)
train_op = tf.assign_add(params, tf.reduce_prod(data))
init_op = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init_op)

for batch in data_batches:
    # upload the batch to the TF runtime once and keep its handle
    tensor_handle = sess.run(tensor_handle_op,
                             feed_dict={data_saver_placeholder: batch})
    # run the train op several times, reusing the same uploaded data
    for i in range(3):
        sess.run(train_op, feed_dict={data_placeholder: tensor_handle.handle})

# 1 + 3*1 + 3*6 + 3*120 = 382
assert sess.run(params) == 382
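
The assign_add above is just a stand-in for a training step. Below is a minimal sketch of the same handle pattern driving an actual gradient step; the loss, the 0.1 learning rate, and all names are made-up toy choices, using the same TF 0.x-era API as the example above:

import tensorflow as tf

dt = tf.float32
w = tf.Variable(0.0)

# identical handle plumbing: upload the batch once, reuse it across steps
feed_in = tf.placeholder(dt)
handle_op = tf.get_session_handle(feed_in)
handle_ph, batch = tf.get_session_tensor(dt)

loss = tf.reduce_mean(tf.square(batch - w))  # toy loss: fit the batch mean
train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

sess = tf.Session()
sess.run(tf.initialize_all_variables())

h = sess.run(handle_op, feed_dict={feed_in: [1.0, 2.0, 3.0]})
for _ in range(100):
    # only the small handle string is fed; the batch is not re-transferred
    sess.run(train_op, feed_dict={handle_ph: h.handle})

print(sess.run(w))  # converges toward the batch mean, 2.0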

If you do sess.run([myop, myop]), myop will only run once: duplicate fetches within a single run call are deduplicated.
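
A quick way to convince yourself of the deduplication (a self-contained toy; counter and myop are made-up names):

import tensorflow as tf

counter = tf.Variable(0)
myop = tf.assign_add(counter, 1)

sess = tf.Session()
sess.run(tf.initialize_all_variables())
sess.run([myop, myop])         # duplicate fetches collapse to one execution
assert sess.run(counter) == 1  # incremented once, not twice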

If the op has an output, running it repeatedly from Python also fetches that output on every call. You can wrap it in a group op, which discards the output:

sess.run(tf.group(myop))
sess.run(tf.group(myop))

However, each tf.group call creates a new op in the graph; if you take many steps this way (say 10-100 per second), the graph can grow past 10k nodes and slow down. Build the group op once and reuse it:

myop_nooutput = tf.group(myop)
sess.run(myop_nooutput)
sess.run(myop_nooutput)
