This answer gives an example of how to use convolutional layers together with recurrent ones. For example, given the last layers of a convolutional stack, such as:
    ...
    l_conv4_a = conv_pre(l_pool3, 16, (5, 5), scope="l_conv4_a")
    l_pool4   = pool(l_conv4_a, scope="l_pool4")
    l_flatten = flatten(l_pool4, scope="flatten")
and a defined RNN cell:
    _, shape_state = tf.nn.dynamic_rnn(
        cell=shape_cell,
        inputs=tf.expand_dims(batch_norm(x_shape_pl), 2),
        dtype=tf.float32,
        scope="shape_rnn")
You can combine both outputs and use them as input for the next layer:
    features = tf.concat([x_margin_pl, shape_state, x_texture_pl, l_flatten],
                         axis=1, name="features")

(In TensorFlow 1.0+ the concatenation dimension is passed as axis; the older concat_dim keyword was removed.)
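The shape bookkeeping of this concatenation can be sketched with NumPy (the sizes below are hypothetical stand-ins for the real tensors): each input is a [batch, features_i] array, and concatenating along axis 1 yields [batch, sum(features_i)].

```python
import numpy as np

batch = 4  # hypothetical batch size
# Stand-ins for x_margin_pl, shape_state, x_texture_pl and l_flatten,
# with made-up feature widths:
x_margin = np.zeros((batch, 64))
shape_state = np.zeros((batch, 100))
x_texture = np.zeros((batch, 64))
l_flatten = np.zeros((batch, 256))

# Concatenating along axis 1 stacks the feature dimensions side by side:
features = np.concatenate([x_margin, shape_state, x_texture, l_flatten], axis=1)
print(features.shape)  # (4, 484)
```

The batch dimension is untouched; only the per-example feature vectors grow, which is exactly what the next dense layer expects.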
Or you can simply use the output of the CNN as the input to the RNN cell. Note that tf.nn.dynamic_rnn expects a 3-D input of shape [batch, time, depth], so the flattened 2-D features must first be expanded, here treating each feature as a one-dimensional timestep:

    _, shape_state = tf.nn.dynamic_rnn(
        cell=shape_cell,
        inputs=tf.expand_dims(l_flatten, 2),
        dtype=tf.float32,
        scope="shape_rnn")
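Why the flattened features need an extra dimension can be illustrated with NumPy (sizes here are hypothetical): dynamic_rnn consumes [batch, time, depth], and expanding the [batch, features] array on axis 2 turns each scalar feature into a depth-1 timestep.

```python
import numpy as np

batch, n_features = 4, 256  # hypothetical sizes
l_flatten = np.zeros((batch, n_features))  # 2-D CNN output

# Expanding on axis 2 gives [batch, time, depth] = [4, 256, 1],
# i.e. a sequence of 256 one-dimensional steps per example:
rnn_input = np.expand_dims(l_flatten, 2)
print(rnn_input.shape)  # (4, 256, 1)
```

An alternative, if you want the whole feature vector as a single timestep instead, is to expand on axis 1, giving [4, 1, 256].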