Clarify using num_layers as n in LSTM implementation
phisad opened this issue · 0 comments
Hello,
I'm trying to re-implement your paper in Keras, and I'm currently struggling with your LSTM implementation.
You pass num_layers as n when initializing the LSTM, but num_layers should be the depth of the LSTM. In the LSTM implementation, however, it seems to be used as the number of timesteps L. Is that correct?
HieCoAttenVQA/misc/ques_level.lua
Line 18 in 82b0bb0
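To illustrate the convention I have in mind: num_layers should control stacking depth, while the sequence length L is how many timesteps the *same* layer weights are unrolled over. A toy Python sketch (not the repo's Lua/Torch code; all names here are made up):

```python
# Hypothetical sketch: depth (num_layers) vs. timesteps (L).
# Each layer has ONE set of "weights" (here, one step function),
# reused at every timestep; depth just stacks more such layers.

def run_stacked_rnn(inputs, layer_steps):
    """inputs: list of L timestep values.
    layer_steps: one step function per layer (i.e. one set of
    weights per layer), each reused at every timestep."""
    seq = inputs
    for step in layer_steps:      # depth: num_layers iterations
        h = 0.0
        out = []
        for x in seq:             # time: L iterations, SAME `step`
            h = step(x, h)        # identical weights at every t
            out.append(h)
        seq = out
    return seq

# toy "weights": each layer is one function, shared across all timesteps
layers = [lambda x, h: 0.5 * x + 0.5 * h,
          lambda x, h: x + 0.1 * h]
print(run_stacked_rnn([1.0, 2.0, 3.0], layers))
```

So here num_layers would be `len(layer_steps)` and L would be `len(inputs)`, which is why using one value for the other surprised me.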
Furthermore, createClones appears to create a separate set of weights for each timestep. Is this intentional, or a bug? An LSTM should share the same weights across timesteps.
HieCoAttenVQA/misc/ques_level.lua
Line 52 in 82b0bb0
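The distinction I'm asking about can be shown with a small Python sketch (again hypothetical, not the repo's code): clones that reference one shared parameter storage behave like a correctly unrolled LSTM, whereas clones that deep-copy the parameters would learn independent weights per timestep.

```python
# Hypothetical sketch of the cloning question: shared-parameter clones
# vs. independent per-timestep copies. All names are made up.
import copy

class Cell:
    def __init__(self, w):
        self.w = w  # parameters stored in a mutable list

def make_clones(cell, n, share_weights):
    if share_weights:
        # every clone points at the SAME parameter storage
        return [Cell(cell.w) for _ in range(n)]
    # independent copies: updating one clone leaves the rest unchanged
    return [Cell(copy.deepcopy(cell.w)) for _ in range(n)]

proto = Cell([0.5])
shared = make_clones(proto, 3, share_weights=True)
unshared = make_clones(proto, 3, share_weights=False)

shared[0].w[0] = 0.9        # "update" one clone's weight
print(shared[1].w[0])       # shared clone sees the update -> 0.9
print(unshared[1].w[0])     # independent copy keeps 0.5
```

If the clones in createClones share their underlying weight tensors, that would match the shared case above and the implementation would be fine; if each clone owns its own weights, that would be the behavior I'm worried about.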