TensorFlow: how to change a value in a tensor

Since I need to do some preprocessing of the data before using TensorFlow to train the models, I need to make some changes to a tensor. However, I have no idea how to change the values in a tensor the way I would with numpy.

The most straightforward way would be to modify the tensor directly. However, this seems to be impossible in the current version of TensorFlow. An alternative is to convert the tensor to an ndarray, do the processing there, and then use tf.convert_to_tensor to convert it back.

The key question is how to convert a tensor to an ndarray.
1) tf.contrib.util.make_ndarray(tensor) : https://www.tensorflow.org/versions/r0.8/api_docs/python/contrib.util.html#make_ndarray
This seems to be the easiest way according to the documentation, but I cannot find this function in the current version of TensorFlow. Also, its input is a TensorProto rather than a tensor.
2) Use a.eval() to copy a into an ndarray (a rough sketch of this round trip follows below).
However, this only works when tf.InteractiveSession() is used, for example in a notebook.
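
The round trip I have in mind looks roughly like this (option 2, so it relies on an InteractiveSession for eval(); the names here are only for illustration):

 import numpy as np
 import tensorflow as tf

 tf.InteractiveSession()

 t = tf.constant([[1.,2.],[3.,4.]])

 a = t.eval()                      # copy the tensor's value into an ndarray
 a += np.array([[.1,.2]])          # modify it with ordinary numpy operations
 t_new = tf.convert_to_tensor(a)   # convert back; note this creates a new tensor
 print(t_new.eval())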

A simple example with code is shown below. The purpose of this code is to make tfc produce the same output as npc after the processing.

HINT
You should treat tfc and npc as independent of each other. This corresponds to the situation where the training data is first retrieved in tensor format, e.g. via tf.placeholder().


Source

 import numpy as np
 import tensorflow as tf

 tf.InteractiveSession()

 tfc = tf.constant([[1.,2.],[3.,4.]])
 npc = np.array([[1.,2.],[3.,4.]])
 row = np.array([[.1,.2]])

 print('tfc:\n', tfc.eval())
 print('npc:\n', npc)

 for i in range(2):
     for j in range(2):
         npc[i,j] += row[0,j]

 print('modified tfc:\n', tfc.eval())
 print('modified npc:\n', npc)

Output:

 tfc:
  [[ 1.  2.]
  [ 3.  4.]]
 npc:
  [[ 1.  2.]
  [ 3.  4.]]
 modified tfc:
  [[ 1.  2.]
  [ 3.  4.]]
 modified npc:
  [[ 1.1  2.2]
  [ 3.1  4.2]]

2 answers

Use assign and eval (or sess.run) the assign op:

 import numpy as np
 import tensorflow as tf

 npc = np.array([[1.,2.],[3.,4.]])
 tfc = tf.Variable(npc)  # Use a variable
 row = np.array([[.1,.2]])

 with tf.Session() as sess:
     tf.initialize_all_variables().run()  # need to initialize all variables

     print('tfc:\n', tfc.eval())
     print('npc:\n', npc)

     for i in range(2):
         for j in range(2):
             npc[i,j] += row[0,j]

     tfc.assign(npc).eval()  # assign_sub/assign_add is also available.
     print('modified tfc:\n', tfc.eval())
     print('modified npc:\n', npc)

It outputs:

 tfc:
  [[ 1.  2.]
  [ 3.  4.]]
 npc:
  [[ 1.  2.]
  [ 3.  4.]]
 modified tfc:
  [[ 1.1  2.2]
  [ 3.1  4.2]]
 modified npc:
  [[ 1.1  2.2]
  [ 3.1  4.2]]
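
The assignment line can equally be written with sess.run instead of eval. Inside the same with tf.Session() as sess: block above, that would be:

 sess.run(tfc.assign(npc))            # same effect as tfc.assign(npc).eval()
 print('modified tfc:\n', sess.run(tfc))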

I struggled with this for a while. The answer above will add assign operations to the graph (and therefore unnecessarily increase the size of the .meta file if you later save a checkpoint). A better solution is to use tf.keras.backend.set_value. You can imitate what it does in raw TensorFlow like this:

 import tensorflow as tf
 from tensorflow.python.keras.backend import get_session

 # values_npfmt: a list of numpy arrays holding the new values,
 # one per global variable and in the same order.
 for x, value in zip(tf.global_variables(), values_npfmt):
     if hasattr(x, '_assign_placeholder'):
         # Reuse the placeholder/assign op cached on the variable so that
         # repeated calls do not keep adding nodes to the graph.
         assign_placeholder = x._assign_placeholder
         assign_op = x._assign_op
     else:
         assign_placeholder = tf.placeholder(x.dtype.base_dtype, shape=value.shape)
         assign_op = x.assign(assign_placeholder)
         x._assign_placeholder = assign_placeholder
         x._assign_op = assign_op
     get_session().run(assign_op, feed_dict={assign_placeholder: value})
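
For comparison, a minimal sketch of the direct route through tf.keras.backend.set_value (graph-mode TF 1.x assumed; the variable v and the values are only illustrative):

 import numpy as np
 import tensorflow as tf

 v = tf.Variable(np.zeros((2, 2)))
 sess = tf.keras.backend.get_session()        # Keras-managed session
 sess.run(tf.global_variables_initializer())

 tf.keras.backend.set_value(v, np.array([[1.1, 2.2], [3.1, 4.2]]))
 print(tf.keras.backend.get_value(v))         # [[1.1 2.2] [3.1 4.2]]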
