graykode/nlp-tutorial

A question about the decoder in seq2seq-torch

acm5656 opened this issue · 1 comment

input_batch, output_batch, _ = make_batch([[word, 'P' * len(word)]])

Hi, I'm an NLP rookie and I want to ask you a question. I read the seq2seq paper, which uses the output at step t-1 as the decoder input at step t. Your code on this line uses 'SPPPPP' as the decoder input instead. Does feeding a fixed placeholder sequence this way harm the result?
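To make the question concrete, here is a minimal sketch of the two schemes as I understand them. This is not the repo's code; the sizes and names (SOS, PAD, decoder, out_proj, etc.) are my own stand-ins:

```python
import torch
import torch.nn as nn

vocab_size, emb_dim, hidden_dim, max_len = 30, 16, 32, 6
SOS, PAD = 0, 1  # stand-ins for the tutorial's 'S' and 'P' symbols

embed = nn.Embedding(vocab_size, emb_dim)
decoder = nn.RNN(emb_dim, hidden_dim, batch_first=True)
out_proj = nn.Linear(hidden_dim, vocab_size)

enc_hidden = torch.zeros(1, 1, hidden_dim)  # pretend encoder summary

# Scheme in the tutorial: feed the fixed sequence 'S' + 'PPPPP' all at once,
# so the decoder input at step t is just padding, not the previous output.
fixed_inputs = torch.tensor([[SOS] + [PAD] * (max_len - 1)])
out, _ = decoder(embed(fixed_inputs), enc_hidden)
logits_fixed = out_proj(out)  # (1, max_len, vocab_size)

# Scheme in the paper: feed the previous step's prediction back in.
inp = torch.tensor([[SOS]])
hidden = enc_hidden
preds = []
for _ in range(max_len):
    out, hidden = decoder(embed(inp), hidden)
    next_tok = out_proj(out).argmax(-1)  # greedy choice of token t
    preds.append(next_tok.item())
    inp = next_tok  # output at t-1 becomes input at t
```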
If you see this issue, please answer me when you have some free time.
Although my English is poor, I still want to express my gratitude to you.

Hi, after several years you must understand this code well, so I want to ask a question too. I think the code differs from the paper "Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation". In that paper, the encoder's summary vector c is fed into every cell of the decoder, both when computing the hidden state and when computing the output. However, I did not see this in the code. Is that right?
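For reference, here is a minimal sketch of what I mean, based on my own reading of the paper's formulation (roughly h_t = f(h_{t-1}, y_{t-1}, c), with c also feeding the output layer). All names and sizes here are hypothetical, not from the repo:

```python
import torch
import torch.nn as nn

emb_dim, hidden_dim, vocab_size, max_len = 16, 32, 30, 6

embed = nn.Embedding(vocab_size, emb_dim)
# input at each step = token embedding concatenated with the context vector
decoder = nn.GRU(emb_dim + hidden_dim, hidden_dim, batch_first=True)
# output layer also sees the context and the input embedding
out_proj = nn.Linear(hidden_dim + hidden_dim + emb_dim, vocab_size)

c = torch.randn(1, 1, hidden_dim)            # encoder summary (context)
tokens = torch.randint(0, vocab_size, (1, max_len))
emb = embed(tokens)                          # (1, max_len, emb_dim)
c_rep = c.expand(-1, max_len, -1)            # repeat c for every step
dec_out, _ = decoder(torch.cat([emb, c_rep], dim=-1), c)
logits = out_proj(torch.cat([dec_out, c_rep, emb], dim=-1))
```

In the tutorial's code, by contrast, the encoder state only seems to be used once, as the decoder's initial hidden state.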