TensorFlow: question about TensorFlow functions

I am new to TensorFlow and I have the following problem:

Input: a list of floats (or a dynamically sized array; the Python list type works). Output: a 2-dimensional array of size len(input) × len(input)

Example 1:

Input:

 [1.0, 2.0, 3.0] 

Output:

    [[0.09003057, 0.24472847, 0.66524096],
     [0.26894142, 0.73105858, 0.0       ],
     [1.0,        0.0,        0.0       ]]
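In other words, each output row i is the softmax of the first len(input) - i elements of the input, right-padded with zeros. In plain NumPy (just to illustrate the pattern I want):

    import numpy as np

    def rowwise_prefix_softmax(xs):
        # row i = softmax of the first n - i elements, right-padded with zeros
        n = len(xs)
        out = np.zeros((n, n))
        for i in range(n):
            e = np.exp(xs[:n - i])
            out[i, :n - i] = e / e.sum()
        return out

    print(rowwise_prefix_softmax([1.0, 2.0, 3.0]))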

I tried writing a function with a while loop that evaluates each row independently and then concatenates the rows, but my instructor asked me to learn other approaches.

Can you suggest a way to approach this problem?

python tensorflow

2 answers

You can achieve this with the following approach:

  • Tile the input array to create a square matrix out of the input data.
  • Create a mask whose ones form a triangle anchored at the upper-left corner.
  • Compute the softmax with that mask. Note that we cannot use tf.nn.softmax here, because it would assign small probabilities to the masked-out zeros as well.

Here is the TensorFlow code (v0.12.1) that does this:

    import tensorflow as tf

    def create_softmax(x):
        x_len = int(x.get_shape()[0])
        # create a tiled array:
        # [1, 2, 3]
        # =>
        # [[1, 2, 3], [1, 2, 3], [1, 2, 3]]
        x_tiled = tf.tile(tf.expand_dims(x, 0), [x_len, 1])
        # build the mask for element-wise multiplication
        mask = tf.ones_like(x_tiled)             # same shape, filled with 1s
        mask = tf.matrix_band_part(mask, 0, -1)  # zero everything except the upper triangular part
        mask = tf.reverse(mask, [False, True])   # reverse along the column dimension (v0.12 API)
        # compute the masked softmax
        exp = tf.exp(x_tiled) * mask
        sum_exp = tf.reshape(tf.reduce_sum(exp, reduction_indices=1), (-1, 1))
        x_softmax = exp / sum_exp
        return x_softmax
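A quick way to check it against the example (a usage sketch, assuming TF v0.12):

    x = tf.constant([1.0, 2.0, 3.0])
    result = create_softmax(x)

    with tf.Session() as sess:
        print(sess.run(result))
    # [[ 0.09003057  0.24472847  0.66524096]
    #  [ 0.26894142  0.73105858  0.        ]
    #  [ 1.          0.          0.        ]]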

This is probably a little late for your class, but hopefully it helps someone.

If your goal is simply to produce a len(input) x len(input) array, you can expand your input to a len(input) x 1 column and matrix-multiply it by a 1 x len(input) variable:

    input_ = tf.placeholder(tf.float32, [len(input)])
    input_shape = input_.get_shape().as_list()
    tfvar = tf.Variable(tf.random_normal([1, input_shape[0]], mean=0.0,
                                         stddev=0.01, dtype=tf.float32))

    def function(input_):
        x = tf.expand_dims(input_, axis=1)  # shape: len(input) x 1
        return tf.matmul(x, tfvar)          # matrix multiplication produces a 3x3 matrix for the example

This function generalizes to any 1-D input_ tensor and produces a square len(input_) x len(input_) tensor.
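For example, assuming input is the question's list [1.0, 2.0, 3.0] (which the placeholder definition above already references), a quick sketch to evaluate it:

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        output = sess.run(function(input_), feed_dict={input_: input})
        print(output.shape)  # (3, 3)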

If your goal is to train the TensorFlow variable to reproduce the provided output, you can train tfvar with a loss function and an optimizer:

    desired_output = tf.constant([[0.09003057, 0.24472847, 0.66524096],
                                  [0.26894142, 0.73105858, 0.0],
                                  [1.0, 0.0, 0.0]], dtype=tf.float32)
    actual_output = function(input_)
    loss = tf.reduce_mean(tf.square(actual_output - desired_output))
    optimizer = tf.train.AdamOptimizer().minimize(loss)
    init = tf.global_variables_initializer()

    with tf.Session() as sess:
        sess.run(init)
        cost, opt = sess.run([loss, optimizer], feed_dict={input_: input})

Please note: if you want more robust training, add a bias term, a non-linearity, and additional layers.
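In practice you would also run the optimizer for many steps rather than just once; a minimal sketch (the step count of 1000 is arbitrary):

    with tf.Session() as sess:
        sess.run(init)
        for step in range(1000):
            cost, _ = sess.run([loss, optimizer], feed_dict={input_: input})
            if step % 100 == 0:
                print("step %d, loss %.6f" % (step, cost))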

