A question about the decoder in Seq2Seq-Torch
acm5656 opened this issue · 1 comment
acm5656 commented
nlp-tutorial/4-1.Seq2Seq/Seq2Seq-Torch.py
Line 92 in 6e171b9
Hi, I'm an NLP rookie and I'd like to ask a question. I read the seq2seq paper, which uses the decoder's output at step t-1 as its input at step t. Your code on this line instead uses 'SPPPPP' (a start symbol followed by padding) as the decoder input. Does this hurt the results?
If you see this issue, please answer when you have free time.
Although my English is poor, I still want to express my gratitude to you.
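For comparison, the paper's scheme can be sketched as a greedy autoregressive loop, where the argmax prediction at step t-1 is fed back as the input at step t instead of a fixed 'SPPPPP' sequence. This is only an illustrative sketch, not the repo's code: the sizes, the `RNNCell`/`Linear` pair, and the zero initial hidden state are all assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical sizes for illustration (not taken from the repo).
n_class, n_hidden, max_len = 10, 16, 5

cell = nn.RNNCell(input_size=n_class, hidden_size=n_hidden)
out_proj = nn.Linear(n_hidden, n_class)

def greedy_decode(hidden, sos_idx=0):
    """Autoregressive decoding: the prediction at step t-1 becomes the
    input at step t, rather than feeding a fixed 'S' + padding sequence."""
    inp = torch.eye(n_class)[sos_idx].unsqueeze(0)  # one-hot start symbol, [1, n_class]
    outputs = []
    for _ in range(max_len):
        hidden = cell(inp, hidden)       # [1, n_hidden]
        logits = out_proj(hidden)        # [1, n_class]
        pred = logits.argmax(dim=1)      # predicted token index, [1]
        outputs.append(pred.item())
        inp = torch.eye(n_class)[pred]   # feed the prediction back as the next input
    return outputs

tokens = greedy_decode(torch.zeros(1, n_hidden))
```

During training, teacher forcing (feeding the ground-truth previous token, as the tutorial effectively does with the shifted target sequence) is standard; the feedback loop above matters mainly at inference time.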
Angry-Echo commented
Hi, after several years you must understand the code by now, so I want to ask you a question. I think the code differs from the paper "Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation": in the paper, the encoder's summary (the context vector) is fed into every decoder cell, both for the hidden state and for the output, but I don't see that in the code. Is that right?
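The conditioning described in that paper can be sketched as follows: the fixed encoder summary c is concatenated to the decoder input at every step and also to the hidden state when computing each output. This is a hedged illustration under assumed sizes, not the tutorial's implementation.

```python
import torch
import torch.nn as nn

# Hypothetical sizes for illustration (not taken from the repo).
n_class, n_hidden, max_len = 10, 16, 5

# Decoder cell whose input is the token one-hot concatenated with the
# fixed encoder summary c, in the spirit of Cho et al. (2014).
cell = nn.RNNCell(input_size=n_class + n_hidden, hidden_size=n_hidden)
out_proj = nn.Linear(n_hidden + n_hidden, n_class)  # output also conditioned on c

def decode_with_context(c, hidden, sos_idx=0):
    inp = torch.eye(n_class)[sos_idx].unsqueeze(0)   # one-hot start symbol
    logits_all = []
    for _ in range(max_len):
        hidden = cell(torch.cat([inp, c], dim=1), hidden)  # c enters every cell
        logits = out_proj(torch.cat([hidden, c], dim=1))   # and every output
        logits_all.append(logits)
        inp = torch.eye(n_class)[logits.argmax(dim=1)]     # feed prediction back
    return torch.stack(logits_all, dim=0)

c = torch.randn(1, n_hidden)  # stand-in for the encoder's final hidden state
logits = decode_with_context(c, torch.zeros(1, n_hidden))
```

In the tutorial's code, by contrast, the encoder's final hidden state only initializes the decoder's hidden state; it is not re-injected at every step, which appears to be the difference the question is pointing at.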