Combining convolutional layers and recurrent layers in TensorFlow?

Has anyone combined convolutional layers and recurrent layers in TensorFlow?

For example: input -> conv -> GRU -> linear -> output

I can imagine that you could define your own cell whose forward pass applies convolutional layers and which carries no state, which could then be stacked using the MultiRNNCell function, something like:

cell = tf.nn.rnn_cell.MultiRNNCell([conv_cell, GRU_cell, linear_cell])

That would make life so much easier ...
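Concretely, a stateless conv cell along those lines might look like this sketch, using the tf.nn.rnn_cell API from this era of TensorFlow; all names, shapes, and the filter size are hypothetical, and a linear cell would follow the same pattern:

 import tensorflow as tf

 class ConvCell(tf.nn.rnn_cell.RNNCell):
     """Stateless 'cell' that applies a 1-D convolution at each time step."""
     def __init__(self, input_width, num_filters, filter_width=5):
         self._input_width = input_width
         self._num_filters = num_filters
         self._filter_width = filter_width

     @property
     def state_size(self):
         return 1  # dummy state; the cell is purely feed-forward

     @property
     def output_size(self):
         return self._input_width * self._num_filters

     def __call__(self, inputs, state, scope=None):
         with tf.variable_scope(scope or "conv_cell"):
             # treat the feature vector of each step as a 1-D signal
             x = tf.reshape(inputs, [-1, self._input_width, 1])
             filt = tf.get_variable(
                 "filt", [self._filter_width, 1, self._num_filters])
             out = tf.nn.conv1d(x, filt, stride=1, padding="SAME")
             out = tf.reshape(out, [-1, self.output_size])
         return out, state  # pass the dummy state through unchanged

 conv_cell = ConvCell(input_width=64, num_filters=16)
 GRU_cell = tf.nn.rnn_cell.GRUCell(128)
 cell = tf.nn.rnn_cell.MultiRNNCell([conv_cell, GRU_cell])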

2 answers

Can't you just do the following:

rnnouts, _ = tf.nn.rnn(grucell, inputs, dtype=tf.float32)
linearout = [tf.matmul(rnnout, weights) + bias for rnnout in rnnouts]

and so on.
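Spelled out, that suggestion might look like the following sketch; the sizes and variable names are illustrative, and tf.nn.dynamic_rnn replaces the list-based tf.nn.rnn:

 import tensorflow as tf

 batch_size, n_steps, n_features, n_hidden, n_out = 32, 20, 64, 128, 10

 inputs = tf.placeholder(tf.float32, [batch_size, n_steps, n_features])
 grucell = tf.nn.rnn_cell.GRUCell(n_hidden)

 # recurrent part: one n_hidden-sized output per time step
 rnnouts, _ = tf.nn.dynamic_rnn(grucell, inputs, dtype=tf.float32)

 # linear part: apply the same projection at every time step
 weights = tf.get_variable("weights", [n_hidden, n_out])
 bias = tf.get_variable("bias", [n_out])
 flat = tf.reshape(rnnouts, [-1, n_hidden])
 linearout = tf.reshape(tf.matmul(flat, weights) + bias,
                        [batch_size, n_steps, n_out])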


This notebook gives an example of how to use convolutional layers together with recurrent ones. For example, with the last convolutional layers looking like:

 ...
 l_conv4_a = conv_pre(l_pool3, 16, (5, 5), scope="l_conv4_a")
 l_pool4 = pool(l_conv4_a, scope="l_pool4")
 l_flatten = flatten(l_pool4, scope="flatten")
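The helpers conv_pre, pool, and flatten come from that notebook; roughly, they wrap the standard ops like this (a sketch under assumed kernel sizes and strides, not the notebook's exact definitions):

 import tensorflow as tf

 # Hypothetical versions of the helpers used above; the real definitions
 # may differ in kernel sizes, strides, and initialization.
 def conv_pre(x, num_filters, kernel, scope):
     with tf.variable_scope(scope):
         in_ch = x.get_shape()[-1].value
         w = tf.get_variable("w", list(kernel) + [in_ch, num_filters])
         b = tf.get_variable("b", [num_filters])
         return tf.nn.relu(tf.nn.conv2d(x, w, [1, 1, 1, 1], "SAME") + b)

 def pool(x, scope):
     with tf.name_scope(scope):
         return tf.nn.max_pool(x, [1, 2, 2, 1], [1, 2, 2, 1], "SAME")

 def flatten(x, scope):
     with tf.name_scope(scope):
         dim = 1
         for d in x.get_shape()[1:]:  # assumes a fully-defined shape
             dim *= d.value
         return tf.reshape(x, [-1, dim])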

and an RNN cell (shape_cell) run over a separate input:

 _, shape_state = tf.nn.dynamic_rnn(cell=shape_cell,
                                    inputs=tf.expand_dims(batch_norm(x_shape_pl), 2),
                                    dtype=tf.float32,
                                    scope="shape_rnn")

You can combine both outputs and use them as input for the next layer:

 features = tf.concat(concat_dim=1,
                      values=[x_margin_pl, shape_state, x_texture_pl, l_flatten],
                      name="features")
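(Note that this is the pre-1.0 signature; from TensorFlow 1.0 on the argument order changed, so the equivalent call is tf.concat([x_margin_pl, shape_state, x_texture_pl, l_flatten], axis=1, name="features").)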

Or you can simply use the output of the CNN layer as an input to the RNN cell:

 _, shape_state = tf.nn.dynamic_rnn(cell=shape_cell,
                                    inputs=l_flatten,
                                    dtype=tf.float32,
                                    scope="shape_rnn")
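One caveat: tf.nn.dynamic_rnn expects a 3-D [batch, time, features] tensor, while l_flatten is 2-D after flattening, so it needs a time axis first. One way to add it (my assumption, not from the notebook) mirrors the expand_dims(..., 2) trick used for x_shape_pl above:

 # dynamic_rnn wants [batch, time, features]; l_flatten is [batch, features].
 # Treat each flattened feature as a time step of width 1:
 rnn_inputs = tf.expand_dims(l_flatten, 2)
 _, shape_state = tf.nn.dynamic_rnn(cell=shape_cell,
                                    inputs=rnn_inputs,
                                    dtype=tf.float32,
                                    scope="shape_rnn")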