Implementing a many-to-many LSTM in TensorFlow?

I am using TensorFlow to predict time series data. I have a sequence of 50 tags, and I want to predict the next 5 tags.

As shown in the following figure ("RNNs"), I want to build the 4th structure (many-to-many).

I went through a tutorial demo: Recurrent Neural Networks

But I found that it implements the 5th structure in the image above, which is different from what I need.

Which model should I use? I am thinking of a seq2seq model, but I am not sure it is the right approach.

1 answer

You are correct that you can use a seq2seq model. For brevity, I have written an example of how you can do this in Keras, which has a TensorFlow backend. I have not run this example, so some tweaking may be required. If your tags are one-hot encoded, you will need to use cross-entropy loss instead of MSE.

from keras.models import Model
from keras.layers import Input, LSTM, RepeatVector, TimeDistributed, Dense

# The input shape is your sequence length and your token embedding size
inputs = Input(shape=(seq_len, embedding_size))
# Build an RNN encoder
encoder = LSTM(128, return_sequences=False)(inputs)
# Repeat the encoding once for each of the 5 decoder timesteps
encoding_repeat = RepeatVector(5)(encoder)
# Pass the (5, 128) repeated encoding to the decoder
decoder = LSTM(128, return_sequences=True)(encoding_repeat)
# Apply the same fully connected layer to each timestep of the output
sequence_prediction = TimeDistributed(Dense(1, activation='linear'))(decoder)

model = Model(inputs, sequence_prediction)
model.compile('adam', 'mse')  # Or categorical_crossentropy for one-hot tags
model.fit(X_train, y_train)
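To illustrate the cross-entropy case mentioned above: a minimal, self-contained sketch of the classification variant, where each of the 5 predicted steps is a categorical tag. The sizes (`seq_len`, `embedding_size`, `num_classes`) are placeholder assumptions, not values from the question; the only changes from the regression version are the softmax head and the loss.

```python
import numpy as np
from keras.models import Model
from keras.layers import Input, LSTM, RepeatVector, TimeDistributed, Dense

# Assumed sizes for illustration only
seq_len, embedding_size, num_classes = 50, 16, 10

inputs = Input(shape=(seq_len, embedding_size))
# Encode the whole input sequence into a single 128-dim vector
encoder = LSTM(128, return_sequences=False)(inputs)
# Feed that vector to the decoder at each of the 5 output steps
encoding_repeat = RepeatVector(5)(encoder)
decoder = LSTM(128, return_sequences=True)(encoding_repeat)
# Softmax head: one probability distribution over tags per predicted step
sequence_prediction = TimeDistributed(
    Dense(num_classes, activation='softmax'))(decoder)

model = Model(inputs, sequence_prediction)
# Targets are expected one-hot encoded, shape (batch, 5, num_classes)
model.compile('adam', 'categorical_crossentropy')

# Shape sanity check on random data
X = np.random.rand(2, seq_len, embedding_size).astype('float32')
print(model.predict(X).shape)  # (2, 5, 10)
```

If your targets are integer tag indices rather than one-hot vectors, `sparse_categorical_crossentropy` avoids the one-hot conversion.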
