Attention for LSTM
D0nPiano opened this issue · 2 comments
D0nPiano commented
Hello,
I really like your work on using seq2seq for creating SPARQL queries - just one question: was there a specific reason not to include attention while training? As far as I understood the TensorFlow NMT guide, you would have to add something like `--attention=scaled_luong` to the options in your train.sh. Did you evaluate whether it works better with or without attention?
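For context, this is roughly what I had in mind - a minimal sketch assuming train.sh just wraps the standard tensorflow/nmt training call; the paths, data prefixes and hyperparameters below are placeholders, not the actual values from this repo:

```bash
# Hypothetical train.sh sketch: standard tensorflow/nmt invocation with
# scaled Luong attention enabled. All paths and hyperparameters here are
# illustrative placeholders.
python -m nmt.nmt \
    --attention=scaled_luong \
    --src=en --tgt=sparql \
    --vocab_prefix=data/vocab \
    --train_prefix=data/train \
    --dev_prefix=data/dev \
    --test_prefix=data/test \
    --out_dir=model_attention \
    --num_train_steps=12000 \
    --steps_per_stats=100 \
    --num_layers=2 \
    --num_units=128 \
    --dropout=0.2 \
    --metrics=bleu
```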
Greetings!