dheerajrajagopal/SelfExplain

Missing normalization based on phrase length?

akshaylive opened this issue · 1 comment

According to the paper (section 2.2), each constituent (non-terminal) representation is the average of the token representations of the tokens in that phrase.

The code, however, performs a batch matrix multiplication, which yields the sum of the hidden token representations rather than their average. This may affect both the magnitude and the direction of the phrase-level representation after the activation is applied.
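For concreteness, here is a minimal sketch of the difference (not the repo's actual code; tensor names, shapes, and the mask layout are assumed for illustration):

```python
import torch

# Assumed shapes: hidden_states is (batch, seq_len, dim); phrase_mask is a
# 0/1 matrix (batch, num_phrases, seq_len) selecting the tokens of each
# non-terminal. Both names are hypothetical, not taken from the repo.
hidden_states = torch.randn(2, 8, 16)
phrase_mask = torch.zeros(2, 4, 8)
phrase_mask[:, 0, :3] = 1.0  # e.g. the first phrase spans tokens 0-2

# A bmm over the 0/1 mask gives the SUM of the token vectors in each phrase.
phrase_sum = torch.bmm(phrase_mask, hidden_states)

# Section 2.2 describes an AVERAGE, i.e. the sum divided by the phrase length.
phrase_len = phrase_mask.sum(dim=-1, keepdim=True).clamp(min=1.0)
phrase_mean = phrase_sum / phrase_len
```

For a three-token phrase the summed vector is three times larger than the averaged one, so any non-linear activation applied afterwards can point in a different direction as well.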

Am I missing something?

I just realized this has been asked before here. It would be good to add a disclaimer about this to the README.