jojonki/word2vec-pytorch

Skip-gram code is wrong; please refer to the original paper.


The original word2vec uses two embeddings for each word: an input embedding and an output embedding. Your implementation only uses one.
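For reference, here is a minimal sketch of a skip-gram model with negative sampling that keeps the two embedding tables separate, as described in Mikolov et al. (2013). The class and parameter names (`SkipGram`, `in_embed`, `out_embed`, `n_neg`, etc.) are illustrative, not the names used in this repository:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SkipGram(nn.Module):
    """Skip-gram with separate input (center) and output (context) embeddings."""

    def __init__(self, vocab_size, embed_dim):
        super().__init__()
        # Two distinct embedding tables: one for center words, one for context words.
        self.in_embed = nn.Embedding(vocab_size, embed_dim)   # input / center embeddings
        self.out_embed = nn.Embedding(vocab_size, embed_dim)  # output / context embeddings
        self.in_embed.weight.data.uniform_(-0.5 / embed_dim, 0.5 / embed_dim)
        self.out_embed.weight.data.zero_()

    def forward(self, center, context, negatives):
        # center:    (batch,)        indices of center words
        # context:   (batch,)        indices of observed context words
        # negatives: (batch, n_neg)  indices of negatively sampled words
        v = self.in_embed(center)                               # (batch, dim)
        u_pos = self.out_embed(context)                         # (batch, dim)
        u_neg = self.out_embed(negatives)                       # (batch, n_neg, dim)

        pos_score = (v * u_pos).sum(dim=1)                      # (batch,)
        neg_score = torch.bmm(u_neg, v.unsqueeze(2)).squeeze(2) # (batch, n_neg)

        # Negative-sampling objective:
        # maximize log sigma(v . u_pos) + sum_k log sigma(-v . u_neg_k)
        loss = -(F.logsigmoid(pos_score) + F.logsigmoid(-neg_score).sum(dim=1))
        return loss.mean()
```

After training, the input embeddings (`in_embed.weight`) are typically taken as the final word vectors; using a single shared table for both roles changes the model and its learned geometry.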