[NOTE: this answer has been updated for r1.0 ... but explains legacy_seq2seq instead of tensorflow/tensorflow/contrib/seq2seq/]
The good news is that the seq2seq models provided in TensorFlow are quite sophisticated, including embeddings, buckets, an attention mechanism, one-to-many multi-task models, etc.
The bad news is that there are many layers of abstraction in the Python code, and that, as far as I can tell, the code itself is the best available "documentation" of the higher-level RNN and seq2seq "API".
To follow the seq2seq RNN code in r1.0, read these files top-down:
models/tutorials/rnn/translate/translate.py
... provides main(), train(), and decode(): working code that drives the translation model end to end.
models/tutorials/rnn/translate/seq2seq_model.py
... class Seq2SeqModel() builds the actual RNN encoder-decoder model, with embeddings, buckets, and attention.
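Conceptually, the attention part of such a model reduces to computing, at each decoder step, a weighted sum of the encoder states. The library's attention_decoder() uses a learned additive scoring function; the sketch below substitutes plain dot-product scoring purely for illustration, and all names in it are my own, not the library's:

```python
import numpy as np

def dot_product_attention(query, encoder_states):
    """Toy attention: score each encoder state against the decoder query,
    softmax the scores into weights, and return the weighted sum (context)."""
    scores = encoder_states @ query              # one score per time step, (T,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                     # softmax over time steps
    context = weights @ encoder_states           # weighted sum of states, (H,)
    return context, weights
```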
tensorflow/tensorflow/contrib/legacy_seq2seq/python/ops/seq2seq.py
... the seq2seq library functions: model_with_buckets(), embedding_attention_seq2seq(), embedding_attention_decoder(), attention_decoder(), sequence_loss(), etc.
It also includes one2many_rnn_seq2seq, and simpler variants without embeddings or attention such as basic_rnn_seq2seq. These are the functions you can wire into your own code.
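For orientation, the idea behind model_with_buckets() is to pad each sequence only up to the smallest bucket length that fits it, instead of padding everything to the global maximum. A pure-Python sketch of just that padding step (pad_to_bucket is a hypothetical helper, not part of the library):

```python
def pad_to_bucket(sequence, buckets, pad_id=0):
    """Pick the smallest bucket that fits the sequence and pad to that length.
    Illustrates the bucketing idea behind model_with_buckets()."""
    for size in sorted(buckets):
        if len(sequence) <= size:
            return sequence + [pad_id] * (size - len(sequence))
    raise ValueError("sequence longer than the largest bucket")
```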
tensorflow/tensorflow/contrib/rnn/python/ops/core_rnn.py
... where the RNN is unrolled. Look at static_rnn(); stripped of checks and options, it looks roughly like this:
def simple_rnn(cell, inputs, dtype, scope=None):
    with variable_scope.variable_scope(scope or "simple_RNN") as varscope1:
        if varscope1.caching_device is None:
            varscope1.set_caching_device(lambda op: op.device)

        batch_size = array_ops.shape(inputs[0])[0]
        outputs = []
        state = cell.zero_state(batch_size, dtype)

        for time, input_t in enumerate(inputs):
            if time > 0:
                # Reuse the cell's weights across all time steps.
                variable_scope.get_variable_scope().reuse_variables()
            (output, state) = cell(input_t, state)
            outputs.append(output)

    return outputs, state
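To see what this unrolling computes without any TensorFlow machinery, here is a minimal NumPy equivalent for a vanilla tanh cell. The names and shapes are my own (not the library's); the point is the same loop: one step per time step, with the weights shared across steps:

```python
import numpy as np

def simple_rnn_numpy(inputs, W_x, W_h, b):
    """Unroll a vanilla tanh RNN over a list of input vectors,
    mirroring the static_rnn() loop: same weights at every step."""
    state = np.zeros(W_h.shape[0])          # initial state, like cell.zero_state()
    outputs = []
    for x_t in inputs:
        state = np.tanh(x_t @ W_x + state @ W_h + b)
        outputs.append(state)               # for a vanilla cell, output == new state
    return outputs, state
```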