harvardnlp/seq2seq-attn

input feed

christopher5106 opened this issue · 1 comments

Hi,

It sounds like the previous context is fed back when input_feed == 1

https://github.com/harvardnlp/seq2seq-attn/blob/master/s2sa/models.lua#L27-L28

but then the context vector does not seem to be used at all when input_feed == 0

https://github.com/harvardnlp/seq2seq-attn/blob/master/s2sa/models.lua#L82-L85

Is that desirable?

Thanks

OK, the full context vector is used later by the attention mechanism, regardless of input_feed.
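
To make the distinction concrete, here is a minimal numpy sketch (not the repo's Lua/Torch code; the weights and the tanh-RNN cell standing in for the LSTM are hypothetical) of what input feeding changes: with input_feed == 1 the previous attentional output is concatenated to the word embedding as RNN input, while the encoder context is consumed by the attention step in both modes.

```python
import numpy as np

rng = np.random.default_rng(0)
emb_dim, rnn_dim, src_len = 4, 6, 5

# Hypothetical toy weights for a single decoder step (illustration only).
W_fed   = rng.standard_normal((rnn_dim, emb_dim + rnn_dim))  # input_feed == 1
W_plain = rng.standard_normal((rnn_dim, emb_dim))            # input_feed == 0

def decoder_step(x_emb, prev_attn_out, enc_context, input_feed):
    """One simplified decoder step (a tanh cell stands in for the LSTM).

    enc_context: (src_len, rnn_dim) encoder states; always used by the
    attention below, whatever input_feed is.
    prev_attn_out: previous attentional state h~_{t-1}; fed back into the
    RNN input only when input_feed == 1.
    """
    if input_feed == 1:
        h = np.tanh(W_fed @ np.concatenate([x_emb, prev_attn_out]))
    else:
        h = np.tanh(W_plain @ x_emb)
    # Dot-product attention over the encoder context (both modes).
    scores = enc_context @ h
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    ctx = weights @ enc_context        # (rnn_dim,) context vector
    attn_out = np.tanh(h + ctx)        # simplified attentional output h~_t
    return h, attn_out

x = rng.standard_normal(emb_dim)
context = rng.standard_normal((src_len, rnn_dim))
prev = np.zeros(rnn_dim)
_, out_fed = decoder_step(x, prev, context, input_feed=1)
_, out_plain = decoder_step(x, prev, context, input_feed=0)
```

So input_feed only controls what goes *into* the RNN cell; attention over the full encoder context happens either way.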