barronalex/Tacotron

Attention RNN

Closed this issue · 1 comment

pravn commented

I am wondering whether the attention RNN described in the paper is included in this implementation. If so, could someone point me to the lines of code where it is used? The reason I ask is that the paper appears to keep track of the attention states and use them as input when predicting the next timestep's attention. That would make sense, since Chorowski et al. do something similar (the same mechanism is also used in the Tacotron 2 paper) to keep the attention moving forward. But I could very well be misreading this.
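For reference, here is a minimal NumPy sketch of the location-sensitive scoring from Chorowski et al. that the question describes, simplified to a single convolution filter channel. All names, dimensions, and weights are hypothetical placeholders, not taken from this repo:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def location_sensitive_scores(query, keys, prev_alignment, W, V, U, v, conv_filter):
    """Chorowski-style scoring: the previous alignment is convolved with a
    1-D filter and fed back into the energy, biasing the attention to keep
    moving forward. e_i = v^T tanh(W s + V h_i + U f_i)."""
    f = np.convolve(prev_alignment, conv_filter, mode="same")      # [T] location features
    energies = np.tanh(query @ W + keys @ V + np.outer(f, U)) @ v  # [T]
    return softmax(energies)

# Toy demo with hypothetical dimensions.
T, d_k, d_q, d_a = 5, 8, 8, 16
rng = np.random.default_rng(0)
alignment = location_sensitive_scores(
    rng.normal(size=d_q),          # decoder query s
    rng.normal(size=(T, d_k)),     # encoder states h_1..h_T
    np.full(T, 1.0 / T),           # previous alignment (uniform to start)
    rng.normal(size=(d_q, d_a)),   # W
    rng.normal(size=(d_k, d_a)),   # V
    rng.normal(size=d_a),          # U
    rng.normal(size=d_a),          # v
    np.array([0.2, 0.6, 0.2]),     # small smoothing conv filter
)
print(alignment)  # sums to 1 over the T encoder steps
```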

pravn commented

Never mind - we just have the regular Bahdanau attention mechanism here. I'll close.
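For comparison, a minimal NumPy sketch of plain Bahdanau (content-based) scoring, again with hypothetical names and dimensions rather than code from this repo. Note there is no feedback from the previous alignment:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def bahdanau_scores(query, keys, W, V, v):
    """Plain Bahdanau attention: the alignment at each decoder step
    depends only on the current query and the encoder states.
    e_i = v^T tanh(W s + V h_i)."""
    energies = np.tanh(query @ W + keys @ V) @ v  # [T]
    return softmax(energies)

# Toy demo with hypothetical dimensions.
T, d_k, d_q, d_a = 5, 8, 8, 16
rng = np.random.default_rng(0)
alignment = bahdanau_scores(
    rng.normal(size=d_q),         # decoder query s
    rng.normal(size=(T, d_k)),    # encoder states h_1..h_T
    rng.normal(size=(d_q, d_a)),  # W
    rng.normal(size=(d_k, d_a)),  # V
    rng.normal(size=d_a),         # v
)
print(alignment)  # sums to 1 over the T encoder steps
```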