I assume that with the variable-scope style Matthew mentioned, you can just get the variable:
with tf.variable_scope("embedding_attention_seq2seq"):
    with tf.variable_scope("RNN"):
        with tf.variable_scope("EmbeddingWrapper", reuse=True):
            embedding = vs.get_variable("embedding", [shape], [trainable=])
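For what it's worth, here is a minimal sketch of how the variable retrieved that way could then be overwritten with a pre-trained word2vec matrix. It assumes the TF 1.x API, that the seq2seq graph has already been built (so the variable exists and reuse=True works), and the names vocab_size, emb_dim and pretrained_embeddings are placeholders of my own, not from the original post:

import numpy as np
import tensorflow as tf

vocab_size, emb_dim = 40000, 300  # placeholder sizes
# Stand-in for a matrix built from your word2vec model, one row per vocabulary word.
pretrained_embeddings = np.random.rand(vocab_size, emb_dim).astype(np.float32)

# The seq2seq model is assumed to have been constructed already, so the
# "embedding" variable exists and can be fetched with reuse=True.
with tf.variable_scope("embedding_attention_seq2seq"):
    with tf.variable_scope("RNN"):
        with tf.variable_scope("EmbeddingWrapper", reuse=True):
            embedding = tf.get_variable("embedding", [vocab_size, emb_dim])

assign_op = embedding.assign(pretrained_embeddings)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(assign_op)  # replace the random initialization with the word2vec vectors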
In addition, I would imagine you also want to set the embedding in the decoder; the key (or scope) for it would be something like this:
"embedding_attention_seq2seq/embedding_attention_decoder/embedding"
Thanks for your reply, Lukash!
I was wondering what exactly <b>model.vocab[word]</b> in that code snippet means. Is it just the position of the word in the vocabulary?
If so, wouldn't it be faster to iterate over the vocabulary and fill in the w2v vectors only for the words that actually exist in the w2v model?
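In case it clarifies the question: with the older gensim API (pre-4.0), model.vocab[word] is a vocabulary entry whose .index attribute gives the row of that word in word2vec's own matrix, while model[word] returns the vector itself. Below is a rough sketch of the loop described in the comment, where tf_vocab is a hypothetical word-to-row mapping for the seq2seq model and "word2vec.model" is a placeholder path:

import numpy as np
from gensim.models import Word2Vec

w2v = Word2Vec.load("word2vec.model")      # hypothetical path to a trained model
emb_dim = w2v.vector_size

# Hypothetical seq2seq vocabulary: word -> row index in the embedding matrix.
tf_vocab = {"<pad>": 0, "<go>": 1, "the": 2, "cat": 3}

# Start from random values so words missing from word2vec keep some initialization.
embedding_matrix = np.random.uniform(
    -0.1, 0.1, (len(tf_vocab), emb_dim)).astype(np.float32)

for word, row in tf_vocab.items():
    if word in w2v.vocab:                  # only copy words that word2vec knows
        embedding_matrix[row] = w2v[word]  # the pre-trained vector for this word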