This repo contains code for the following tasks:
- Vectorize sentences using standard word embeddings such as GloVe and word2vec.
- Build word2vec from scratch using skip-gram with negative sampling.
- Fine-tune large pre-trained language models on scientific data.
- Fine-tune a BERT model.
- Search for similar texts using cosine similarity and Euclidean distance.
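The similarity search in the last bullet can be sketched in a few lines of numpy. This is an illustrative sketch, not the repo's actual implementation: the function names and the `top_k` parameter are made up here. Cosine ranks by descending similarity, Euclidean by ascending distance:

```python
import numpy as np

def cosine_sim(a, b):
    # Cosine similarity: dot product of the two vectors over the product of their norms.
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def search(query_vec, doc_vecs, metric="cosine", top_k=3):
    """Rank document vectors against a query vector and return the top_k indices.

    metric="cosine"    -> higher score is better (sort descending)
    metric="euclidean" -> lower distance is better (sort ascending)
    """
    if metric == "cosine":
        scores = np.array([cosine_sim(query_vec, d) for d in doc_vecs])
        order = np.argsort(scores)[::-1]
    else:
        scores = np.array([np.linalg.norm(query_vec - d) for d in doc_vecs])
        order = np.argsort(scores)
    return order[:top_k]
```

Note that the two metrics can disagree: cosine ignores vector length, while Euclidean distance does not, which is why the repo exposes both.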
The implementation using pre-trained word embeddings is a minor modification of Natural Language Processing 2 by Lazyprogrammer.
The implementation of word2vec (skip-gram with negative sampling) is a minor modification of deep_learning_NLP by Tixierae.
The information retrieval with word2vec follows Abhishek Sharma.
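The skip-gram-with-negative-sampling objective referenced above can be sketched as a single SGD update in numpy. This is a minimal illustration under assumptions of my own (matrix names, learning rate, and the loss clamping are not from the repo), not the referenced implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sgns_step(W_in, W_out, center, context, neg_ids, lr=0.025):
    """One skip-gram negative-sampling update.

    W_in:  (V, d) input (center-word) embeddings, updated in place
    W_out: (V, d) output (context-word) embeddings, updated in place
    center, context: word indices of a positive (center, context) pair
    neg_ids: indices of k sampled negative words
    Returns the loss -log sigma(u_c.v) - sum log(1 - sigma(u_n.v)).
    """
    v = W_in[center]                               # (d,)
    ids = np.concatenate(([context], neg_ids))     # positive first, then negatives
    labels = np.zeros(len(ids)); labels[0] = 1.0
    U = W_out[ids]                                 # (k+1, d), a copy via fancy indexing
    scores = sigmoid(U @ v)                        # (k+1,)
    g = scores - labels                            # dLoss/dscores for logistic loss
    W_out[ids] -= lr * np.outer(g, v)              # gradient step on output vectors
    W_in[center] -= lr * (g @ U)                   # gradient step on the center vector
    return -np.log(scores[0] + 1e-10) - np.sum(np.log(1.0 - scores[1:] + 1e-10))
```

Repeating this update over (center, context) pairs drawn from a corpus, with negatives drawn from a unigram noise distribution, is the whole training loop.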
First, clone this repository and open a terminal inside the folder.
Download the pretrained vectors:
word2vec:

```shell
wget -c "https://s3.amazonaws.com/dl4j-distribution/GoogleNews-vectors-negative300.bin.gz"
gunzip GoogleNews-vectors-negative300.bin.gz
```
GloVe:

```shell
wget -c https://nlp.stanford.edu/data/glove.6B.zip
unzip glove.6B.zip
```
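Once unzipped, the GloVe files are plain text: each line is a word followed by its space-separated vector components. A minimal loader sketch, demonstrated on a tiny inline sample (the values and 3-dimensional size below are made up for illustration; the real files are 50- to 300-dimensional):

```python
import io
import numpy as np

def load_glove(handle):
    """Parse the GloVe text format: one `word v1 v2 ... vd` per line."""
    vectors = {}
    for line in handle:
        parts = line.rstrip().split(" ")
        vectors[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vectors

# Tiny inline sample in the same format (illustrative values, not real GloVe weights).
sample = io.StringIO("the 0.1 0.2 0.3\nking -0.4 0.5 0.6\n")
emb = load_glove(sample)
```

For the real file, pass `open("glove.6B.300d.txt", encoding="utf-8")` instead of the inline sample.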
Install dependencies:

```shell
pip install -r requirements.txt
```
Run the app:

```shell
python app.py
```