graykode/nlp-tutorial

Seq2Seq pytorch

DonghyungKo opened this issue · 2 comments

Hi
thanks for sharing your code.

I've read your seq2seq implementation and I was wondering about the RNN Encoder–Decoder model.

In the paper 'Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation', the authors say:

[image: excerpt from the paper]

and they propose the following gating unit:

[image: the proposed gating unit]

I couldn't find this new hidden-state activation function in your code.

Do you have any plans to add the proposed activation process?
Or is it okay to just skip that part?
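
For concreteness, this is the part I mean: a minimal PyTorch sketch of the gated hidden unit from the paper (reset gate, update gate, and candidate state). The module and variable names here are my own, not taken from your code:

```python
import torch
import torch.nn as nn

class GatedHiddenUnit(nn.Module):
    """The hidden-state activation from the paper: a reset gate r and an
    update gate z decide how much of the previous hidden state is kept."""
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.reset = nn.Linear(input_size + hidden_size, hidden_size)
        self.update = nn.Linear(input_size + hidden_size, hidden_size)
        self.candidate = nn.Linear(input_size + hidden_size, hidden_size)

    def forward(self, x, h_prev):
        xh = torch.cat([x, h_prev], dim=-1)
        r = torch.sigmoid(self.reset(xh))    # reset gate
        z = torch.sigmoid(self.update(xh))   # update gate
        # the candidate state sees the *reset* previous hidden state
        h_tilde = torch.tanh(self.candidate(torch.cat([x, r * h_prev], dim=-1)))
        # the update gate interpolates between the old state and the candidate
        return z * h_prev + (1 - z) * h_tilde
```

As far as I understand, this is what `torch.nn.GRU` / `torch.nn.GRUCell` already implement, so maybe it only comes down to which recurrent cell the tutorial uses.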

thank you so much in advance

Yes, but contributions are always open.
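
If someone wants to contribute it, I think the simplest change would be to swap the plain RNN cells for `nn.GRU`, which already contains the reset/update gates. A rough sketch (the class and attribute names below are assumptions, not a copy of the repository code):

```python
import torch.nn as nn

class Seq2Seq(nn.Module):
    def __init__(self, n_class, n_hidden):
        super().__init__()
        # nn.GRU already implements the gating from the paper,
        # so no extra activation code is needed here.
        self.enc_cell = nn.GRU(input_size=n_class, hidden_size=n_hidden)
        self.dec_cell = nn.GRU(input_size=n_class, hidden_size=n_hidden)
        self.fc = nn.Linear(n_hidden, n_class)

    def forward(self, enc_input, enc_hidden, dec_input):
        # enc_input: [seq_len, batch, n_class], enc_hidden: [1, batch, n_hidden]
        _, h = self.enc_cell(enc_input, enc_hidden)   # encode the source sequence
        outputs, _ = self.dec_cell(dec_input, h)      # decode from the encoder state
        return self.fc(outputs)
```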

thank you

I was just wondering whether it matters or not

Straightforward code, btw; really nice work.
thanks