sunzhuntu/Recurrent-Knowledge-Graph-Embedding

How do you implement the attention-gated hidden layer?


I found that the repository contains only one file, "LSTMTagger.py", which implements the RKGE model. In this file, however, the output of the LSTM layer is used directly to form the final hidden representation h, which is then fed into the max-pooling layer. Perhaps I have missed something important, so could you explain how you implemented the attention-gated hidden layer?
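
For reference, here is a minimal sketch of what I would expect the attention-gated hidden layer to look like on top of the LSTM in "LSTMTagger.py". This is only my reading of the paper, not your code: I am assuming the gate has the form h_t = a_t * h̃_t + (1 - a_t) * h_{t-1}, where h̃_t is the raw LSTM hidden state and a_t is a learned scalar attention weight; the module and parameter names (`AttentionGatedLSTM`, `att_proj`, `att_score`) are placeholders I made up.

```python
import torch
import torch.nn as nn

class AttentionGatedLSTM(nn.Module):
    """Sketch of an attention-gated hidden layer over LSTM states.

    Assumed gating (my interpretation, not the authors' code):
        a_t = sigmoid(w^T tanh(W * h~_t))          # scalar gate per step
        h_t = a_t * h~_t + (1 - a_t) * h_{t-1}     # gated hidden state
    where h~_t is the raw LSTM hidden state at step t.
    """

    def __init__(self, input_dim, hidden_dim):
        super().__init__()
        self.lstm = nn.LSTM(input_dim, hidden_dim, batch_first=True)
        self.att_proj = nn.Linear(hidden_dim, hidden_dim)  # W (hypothetical name)
        self.att_score = nn.Linear(hidden_dim, 1)          # w (hypothetical name)

    def forward(self, x):
        # x: (batch, seq_len, input_dim), one embedded path per row
        raw_h, _ = self.lstm(x)                  # (batch, seq_len, hidden_dim)
        gated = []
        h_prev = torch.zeros_like(raw_h[:, 0])   # h_0 = 0
        for t in range(raw_h.size(1)):
            h_tilde = raw_h[:, t]                # raw hidden state at step t
            a_t = torch.sigmoid(
                self.att_score(torch.tanh(self.att_proj(h_tilde))))  # (batch, 1)
            h_t = a_t * h_tilde + (1 - a_t) * h_prev  # mix new and previous state
            gated.append(h_t)
            h_prev = h_t
        gated = torch.stack(gated, dim=1)        # (batch, seq_len, hidden_dim)
        # max-pool over time, as the final pooling step in LSTMTagger.py does
        pooled, _ = gated.max(dim=1)
        return pooled
```

If the actual implementation gates the states differently (for example, attention applied over the pooled path representations rather than per time step), a pointer to the relevant lines would be much appreciated.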