NLP_word_embeddings

In this exercise, I add word embeddings, i.e. vector representations of words, to the experiments. I use the pretrained models Word2Vec and GloVe. Word2Vec captures co-occurrence of words one context window at a time, while GloVe is trained on word co-occurrence statistics aggregated over the whole corpus.
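The window-based co-occurrence counting that both models build on can be sketched as follows. This is a minimal illustration, not the repository's actual code: the corpus, the `cooccurrence_counts` helper, and the window size are made up for the example.

```python
from collections import Counter

def cooccurrence_counts(sentences, window=2):
    """Count how often each ordered word pair occurs within `window`
    positions of each other (hypothetical helper for illustration)."""
    counts = Counter()
    for tokens in sentences:
        for i, center in enumerate(tokens):
            lo = max(0, i - window)
            hi = min(len(tokens), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    counts[(center, tokens[j])] += 1
    return counts

# Toy corpus, invented for this sketch.
corpus = [
    "the cat sat on the mat".split(),
    "the dog sat on the rug".split(),
]
counts = cooccurrence_counts(corpus, window=2)
print(counts[("sat", "on")])  # → 2: "on" falls within 2 words of "sat" in both sentences
```

Word2Vec learns vectors by predicting such window neighbors one window at a time; GloVe first aggregates these counts over the entire corpus and then fits vectors to the resulting co-occurrence matrix.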

Primary language: Jupyter Notebook
