How is the input tensor structured for the TensorFlow tf.nn.dynamic_rnn operator?

I am trying to write a language model using word embeddings and recurrent neural networks in TensorFlow 0.9.0, using the graph operation tf.nn.dynamic_rnn, but I don't understand how the input tensor is structured.

Say I have a corpus of n words. I embed each word into a vector of length e, and I want my RNN to unroll for up to t time steps. Assuming I use the default time_major = False option, what shape would my input tensor [batch_size, max_time, input_size] have?

Perhaps a specific tiny example will make this question clearer. Say I have a corpus of n = 8 words that looks like this:

1, 2, 3, 3, 2, 1, 1, 2

Let's say I embed each word into a vector of size e = 3 with embeddings 1 → [10, 10, 10], 2 → [20, 20, 20] and 3 → [30, 30, 30]. What would the input tensor look like?
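To make the example concrete, here is a small NumPy sketch of the embedding lookup described above (the embedding table and the id-0 padding row are illustrative choices, not part of any TensorFlow API):

```python
import numpy as np

# Hypothetical embedding table: row i holds the vector for word id i.
# Row 0 is reserved as an all-zero padding vector.
embeddings = np.array([
    [0, 0, 0],      # id 0: padding (unused here)
    [10, 10, 10],   # id 1
    [20, 20, 20],   # id 2
    [30, 30, 30],   # id 3
])

corpus = [1, 2, 3, 3, 2, 1, 1, 2]  # the n = 8 word ids above

# Look up each word id, then add a leading batch dimension of size 1.
inputs = embeddings[corpus][np.newaxis, :, :]
print(inputs.shape)   # (1, 8, 3)
print(inputs[0, 0])   # [10 10 10]
```

In TensorFlow itself the lookup step would typically be done with tf.nn.embedding_lookup, but the resulting array has the same layout.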

I read the TensorFlow Recurrent Neural Network tutorial, but it doesn't use tf.nn.dynamic_rnn. I also read the documentation for tf.nn.dynamic_rnn, but I find it confusing. In particular, I'm not sure what "max_time" and "input_size" mean here.

Can someone give the shape of input in terms of n, t and e, and/or an example of what this tensor would look like when initialized with data from the small case I describe?

TensorFlow 0.9.0, Python 3.5.1, OS X 10.11.5

1 answer

batch_size = 1, since you have a single sequence. max_time is the length of your (longest) sequence, here n = 8, and input_size is the embedding size, e = 3. So your input would have shape [1, 8, 3]. Since you are using the batch-major (time_major = False) default, the batch dimension comes first and is 1. If you had a second, shorter sequence, say with n = 6, you would pad it up to length 8 (typically with a special padding symbol), and inputs would then have shape [2, 8, 3].
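A minimal NumPy sketch of batching the original sequence with a second, shorter one (the second sequence and the id-0 padding convention are made up for illustration):

```python
import numpy as np

# Hypothetical embedding table; row 0 is an all-zero padding vector.
embeddings = np.array([
    [0, 0, 0],      # id 0: padding
    [10, 10, 10],   # id 1
    [20, 20, 20],   # id 2
    [30, 30, 30],   # id 3
])

seq_a = [1, 2, 3, 3, 2, 1, 1, 2]  # the original sequence, length 8
seq_b = [3, 1, 2, 2, 1, 3]        # a made-up second sequence, length 6

max_time = max(len(seq_a), len(seq_b))  # 8
# Pad the shorter sequence with id 0 so both have length max_time.
padded_b = seq_b + [0] * (max_time - len(seq_b))

# Stack the embedded sequences along a new batch dimension.
inputs = np.stack([embeddings[seq_a], embeddings[padded_b]])
print(inputs.shape)  # (2, 8, 3) = [batch_size, max_time, input_size]
```

When feeding such a padded batch to tf.nn.dynamic_rnn, you would also pass the true lengths via its sequence_length argument so the RNN stops unrolling at each sequence's real end.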

