kli017 opened this issue 3 years ago · 1 comment
Hello, in transformer.py I found that pos_enc is initialized in the encoder, but it is not used in the forward pass. Is this intentional?
Yes, positional encoding is deliberately not used, as described in the paper: Yusuke Fujita, Naoyuki Kanda, Shota Horiguchi, Yawen Xue, Kenji Nagamatsu, Shinji Watanabe, "End-to-End Neural Speaker Diarization with Self-attention," Proc. ASRU, pp. 296-303, 2019.
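For illustration, here is a minimal sketch (not the repository's actual code) of how such an encoder can look: a positional-encoding buffer is created in `__init__` but deliberately skipped in `forward`, so the self-attention layers see the frame features without any absolute-position information. All dimensions and names here are hypothetical.

```python
import torch
import torch.nn as nn

class EncoderWithoutPosEnc(nn.Module):
    """Hypothetical sketch of a Transformer encoder that defines
    a positional encoding but never applies it in forward()."""

    def __init__(self, in_dim=345, d_model=256, n_heads=4,
                 n_layers=2, max_len=500):
        super().__init__()
        self.linear_in = nn.Linear(in_dim, d_model)
        # Initialized here, but intentionally unused below:
        self.pos_enc = nn.Parameter(torch.zeros(1, max_len, d_model))
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, x):
        # x: (batch, frames, in_dim)
        h = self.linear_in(x)
        # Note: self.pos_enc is NOT added to h here, so the
        # encoder operates without absolute positional information.
        return self.encoder(h)

x = torch.randn(2, 100, 345)
y = EncoderWithoutPosEnc()(x)
print(tuple(y.shape))  # (2, 100, 256)
```

Skipping the addition of `pos_enc` is the whole trick: self-attention itself is permutation-equivariant, which the cited paper argues is suitable for diarization.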