Word-Embedding-Eval

Pointers for the paper "Evaluating Word Embedding Models: Methods and Experimental Results" (APSIPA Transactions on Signal and Information Processing, 2019).

As we have sometimes been asked how to reproduce the experiments, we provide a list of pointers to the resources used in this paper.

The experiments were conducted with different programming languages and in different environments, so working through the links below may take some time.

= = = = = =

The following is a comprehensive list of links for most of the programs we used:

  1. Training Data: http://nlp.stanford.edu/data/WestburyLab.wikicorp.201004.txt.bz2
  2. word2vec: https://code.google.com/archive/p/word2vec/ (a hedged training sketch follows this list)
  3. GloVe: https://nlp.stanford.edu/projects/glove/
  4. FastText: https://github.com/facebookresearch/fastText
  5. ngram2vec: https://github.com/zhezhaoa/ngram2vec
  6. dict2vec: https://github.com/tca19/dict2vec
  7. Word Similarity Evaluation: https://github.com/BinWang28/PVN-Post-Processing-of-word-representation-via-variance-normalization, https://github.com/BinWang28/EvalRank-Embedding-Evaluation (a minimal evaluation sketch also follows this list)
  8. QVEC evaluation: https://github.com/ytsvetko/qvec
  9. Neural Machine Translation: https://github.com/OpenNMT/OpenNMT-py. We evaluate with perplexity only, since fully training an NMT model takes too much time; training uses the Europarl v8 French-English dataset.
  10. Experiments on POS tagging, chunking, and NER: https://github.com/billy322/RepEval-2016
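
The embeddings in the paper were trained with the original toolkits linked above. As a rough starting point for a word2vec-style baseline on the Wikipedia corpus from item 1, here is a minimal training sketch using gensim; gensim is a convenience suggestion rather than the toolkit used in the paper, and every hyperparameter value below is an illustrative assumption.

```python
# Minimal skip-gram training sketch using gensim (an assumption: the paper
# used the original C word2vec toolkit; hyperparameters are illustrative).
from gensim.models import Word2Vec
from gensim.models.word2vec import LineSentence

# Assumes the Westbury Lab Wikipedia corpus (item 1) has been downloaded and
# decompressed to this path. LineSentence treats each line as one
# whitespace-tokenized sentence, a simplification of the raw corpus layout.
corpus = LineSentence("WestburyLab.wikicorp.201004.txt")

model = Word2Vec(
    sentences=corpus,
    vector_size=300,  # embedding dimension
    window=5,         # context window size
    min_count=5,      # drop words rarer than this
    sg=1,             # 1 = skip-gram, 0 = CBOW
    workers=4,        # parallel training threads
)

# Save in the plain-text word2vec format that most evaluation scripts accept.
model.wv.save_word2vec_format("wiki_sgns.txt")
```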

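The word similarity evaluation behind item 7 reduces to one computation: the Spearman rank correlation between the cosine similarities a model assigns to word pairs and the corresponding human similarity ratings. Below is a minimal self-contained sketch; the file formats and names are assumptions for illustration, not taken from the linked repositories.

```python
# Word similarity evaluation sketch: Spearman correlation between model
# cosine similarities and human similarity ratings.
import numpy as np
from scipy.stats import spearmanr

def load_vectors(path):
    """Load plain-text vectors, one "word v1 v2 ... vd" entry per line."""
    vecs = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split()
            if len(parts) > 2:  # skips a possible "vocab_size dim" header
                vecs[parts[0]] = np.asarray(parts[1:], dtype=float)
    return vecs

def word_similarity(vecs, pairs_path):
    """pairs_path holds one "word1 word2 human_score" triple per line."""
    model_sims, human_sims = [], []
    with open(pairs_path, encoding="utf-8") as f:
        for line in f:
            w1, w2, score = line.split()
            if w1 in vecs and w2 in vecs:  # skip out-of-vocabulary pairs
                v1, v2 = vecs[w1], vecs[w2]
                cosine = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
                model_sims.append(cosine)
                human_sims.append(float(score))
    return spearmanr(model_sims, human_sims).correlation

# Example usage (hypothetical file names):
# rho = word_similarity(load_vectors("wiki_sgns.txt"), "wordsim353.txt")
```
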
= = = = = =