How to convert a vector to a unit vector in TensorFlow

This is a pretty simple question that I just cannot figure out. I am working with an output tensor of shape [100, 250]. I want to access each of the 100 rows as a 250-dimensional vector and modify them separately. The TensorFlow math ops I have found either operate elementwise or apply a scalar operation to the entire tensor; what I am trying to do is apply a scalar operation to each row (subset) of the tensor.

EDIT:

Here is the numpy code I would like to create using tensorflow methods:

update = sess.run(y, feed_dict={x: batch_xs})
for i in range(len(update)):
    # divide each row by its L2 norm to get a unit vector
    update[i] = update[i] / np.sqrt(np.sum(np.square(update[i])))
    # scale the unit vector to the desired length
    update[i] = update[i] * magnitude

The first line of the loop applies the unit vector formula, just in 250-D instead of 3-D. The second line multiplies each unit vector by magnitude to scale it to the desired length.
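The per-row loop can also be written without any Python loop, vectorized over all 100 rows at once. A minimal numpy sketch (the magnitude value here is hypothetical, just for illustration):

```python
import numpy as np

update = np.random.rand(100, 250).astype(np.float32)  # stand-in for the network output
magnitude = 5.0  # hypothetical target length

# per-row L2 norms, shape [100, 1], broadcast against [100, 250]
norms = np.sqrt(np.sum(np.square(update), axis=1, keepdims=True))
update = update / norms * magnitude

# every row now has length `magnitude`
print(np.allclose(np.linalg.norm(update, axis=1), magnitude))
```

The keepdims=True is what makes the [100, 1] norms broadcast cleanly against the [100, 250] matrix; the same pattern carries over to the TensorFlow versions below.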

So update here is the [100, 250] numpy output. I want to convert every 250-dimensional row into its unit vector, so that I can then scale it to the length of my choice. If I do this in numpy and then run my train_step, passing the result to one of my placeholders

sess.run(train_step, feed_dict={x: batch_xs, prediction: output}) 

it returns an error:

No gradients provided for any variable

This is because I did the math in numpy and moved the result back into TensorFlow, so TensorFlow cannot differentiate through it. Here is a related Stack Overflow question that has not received an answer.

tf.nn.l2_normalize is very close to what I'm looking for, but it divides by the square root of the maximum sum of squares, while I'm trying to divide each vector by the square root of its own sum of squares.
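For what it's worth, the max in tf.nn.l2_normalize is taken against a small epsilon, not across vectors, so it does divide each row by its own norm. A numpy sketch of what tf.nn.l2_normalize(x, dim=1) computes:

```python
import numpy as np

def l2_normalize(x, axis=1, epsilon=1e-12):
    # x / sqrt(max(sum(x**2), epsilon)) -- the maximum is against epsilon,
    # to guard against division by zero, not a maximum over the rows
    sq_sum = np.sum(np.square(x), axis=axis, keepdims=True)
    return x / np.sqrt(np.maximum(sq_sum, epsilon))

x = np.random.rand(100, 250)
normalized = l2_normalize(x)

# each row is now a unit vector
print(np.allclose(np.linalg.norm(normalized, axis=1), 1.0))
```

Multiplying the normalized result by the desired magnitude then gives vectors of any chosen length, and since everything stays inside the graph when done with TensorFlow ops, gradients flow through it.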

Thanks!


You can do the same math directly in TensorFlow, just like in numpy. Compute the per-row norm as a [100, 1] tensor, then divide: x / norm.

x = tf.placeholder(tf.float32, [100, 250])
# per-row L2 norm, shape [100, 1]
norm = tf.sqrt(tf.reduce_sum(tf.square(x), 1, keep_dims=True))
# [100, 1] broadcasts against [100, 250]
res = x / norm

You can also use tf.norm (my tf version == 1.4).

Example:

  import tensorflow as tf

  a = tf.random_uniform((3, 4))
  b = tf.norm(a, keep_dims=True)          # norm of the whole tensor, shape [1, 1]
  c = tf.norm(a, axis=1, keep_dims=True)  # per-row norm, shape [3, 1]
  d = a / c                               # unit rows via tf.norm
  e = a / tf.sqrt(tf.reduce_sum(tf.square(a), axis=1, keep_dims=True) + 1e-8)
  f = a / tf.sqrt(tf.reduce_sum(tf.square(a), axis=1, keep_dims=True))
  g = tf.sqrt(tf.reduce_sum(tf.square(a), axis=1, keep_dims=True))
  with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    a_eval, b_eval, c_eval, d_eval, e_eval, f_eval, g_eval = sess.run([a, b, c, d, e, f, g])
    print(a_eval)
    print(b_eval)
    print(c_eval)
    print(d_eval)
    print(e_eval)
    print(f_eval)
    print(g_eval)

Output:

[[ 0.29823065  0.76523042  0.40478575  0.44568062]
 [ 0.0222317   0.12344956  0.39582515  0.66143286]
 [ 0.01351094  0.38285756  0.46898723  0.34417391]]
[[ 1.4601624]]
[[ 1.01833284]
 [ 0.78096414]
 [ 0.6965394 ]]
[[ 0.29286167  0.75145411  0.39749849  0.43765712]
 [ 0.02846699  0.15807328  0.50684166  0.84694397]
 [ 0.01939724  0.54965669  0.6733104   0.49411979]]
[[ 0.29286167  0.75145411  0.39749849  0.43765712]
 [ 0.02846699  0.15807328  0.50684166  0.84694397]
 [ 0.01939724  0.54965669  0.6733104   0.49411979]]
[[ 0.29286167  0.75145411  0.39749849  0.43765712]
 [ 0.02846699  0.15807328  0.50684166  0.84694397]
 [ 0.01939724  0.54965669  0.6733104   0.49411979]]
[[ 1.01833284]
 [ 0.78096414]
 [ 0.6965394 ]]

So you can use either a / tf.norm(a, axis=1, keep_dims=True) or a / tf.sqrt(tf.reduce_sum(tf.square(a), axis=1, keep_dims=True) + 1e-8).

I would prefer a / tf.sqrt(tf.reduce_sum(tf.square(a), axis=1, keep_dims=True) + 1e-8), since it avoids division by zero when a row is all zeros.

