
# NLP Word Embedding Models Exploration

## Overview

This project evaluates three prominent word embedding models, Word2Vec, FastText, and GloVe, on synonym tests that measure how well each model captures semantic relationships between words. Both the accuracy and the efficiency of the models are compared.

## Datasets

The project uses synonym test datasets, such as `synonym.csv`, to evaluate model performance: each test challenges a model to identify the correct synonym for a question word from a set of candidate words.
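Loading such a file might look like the sketch below. The column layout assumed here (a question word, the correct answer, and four numbered candidate columns) is an illustration, not the documented schema of `synonym.csv`; adjust the keys to match the actual file.

```python
import csv

def load_synonym_questions(path):
    """Read synonym-test rows from a CSV file.

    Assumes a header row of question,answer,0,1,2,3 -- an illustrative
    layout, not the project's documented schema.
    """
    questions = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            questions.append({
                "question": row["question"],
                "answer": row["answer"],
                "choices": [row[k] for k in ("0", "1", "2", "3")],
            })
    return questions
```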

## Data Processing

The datasets are preprocessed for the NLP tasks: text is segmented into sentences, tokenized, and stripped of stopwords and punctuation. These cleaned datasets serve as the foundation for evaluating how well each model understands and represents word semantics.
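As a rough illustration of those cleaning steps, here is a minimal pure-Python sketch. A real pipeline would more likely use a library tokenizer and stopword list (e.g. NLTK's); the stopword set below is a tiny sample, not the one the project uses.

```python
import string

# Tiny sample stopword list; a full pipeline would use a complete
# list such as NLTK's stopwords corpus.
STOPWORDS = {"the", "a", "an", "is", "are", "of", "to", "and", "in", "on"}

def preprocess(text):
    """Lowercase, split on whitespace, strip punctuation, drop stopwords."""
    tokens = []
    for tok in text.lower().split():
        tok = tok.strip(string.punctuation)
        if tok and tok not in STOPWORDS:
            tokens.append(tok)
    return tokens
```

For example, `preprocess("The cat is on the mat.")` keeps only the content words `["cat", "mat"]`.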

## Execution

The evaluation loads the pre-trained models `word2vec-google-news-300`, `fasttext-wiki-news-subwords-300`, and `glove-wiki-gigaword-300`, runs each against the synonym tests, and analyzes the results for insight into their relative performance.
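Answering a single synonym question reduces to picking the candidate whose embedding is most similar to the question word's. A minimal sketch follows; the function name and the `answered`/`guess` labels are illustrative rather than taken from the project. A dict of word vectors works here, and so does a gensim `KeyedVectors` object (e.g. `gensim.downloader.load("word2vec-google-news-300")`), since it also supports `word in vectors` and `vectors[word]`.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def answer_synonym_question(vectors, question, choices):
    """Return (best_choice, status) for one synonym-test question.

    Status is "guess" when the question word, or every choice, is out
    of vocabulary (labels are illustrative, not the project's own).
    """
    if question not in vectors:
        return None, "guess"
    scored = [(c, cosine(vectors[question], vectors[c]))
              for c in choices if c in vectors]
    if not scored:
        return None, "guess"
    best_choice, _ = max(scored, key=lambda pair: pair[1])
    return best_choice, "answered"
```

Skipping out-of-vocabulary words matters for the comparison: FastText's subword vectors handle rare words more gracefully than Word2Vec's fixed vocabulary.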

## Results and Analysis

The project generates detailed analysis files, including per-model accuracy metrics and a cross-model comparison, giving a comprehensive view of how well each model captures nuanced semantic relationships.
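One common scoring convention for synonym tests separates answered questions from guessed ones (e.g. those skipped for out-of-vocabulary words) and computes accuracy only over the answered set. The sketch below follows that convention as an assumption; it is not the project's documented metric.

```python
def model_accuracy(results):
    """Accuracy over answered questions only.

    `results` is a list of (gold_answer, model_answer, status) tuples,
    where status is "answered" or "guess" -- an assumed format, not the
    project's actual output schema.
    """
    answered = [r for r in results if r[2] == "answered"]
    if not answered:
        return 0.0
    correct = sum(1 for gold, pred, _ in answered if gold == pred)
    return correct / len(answered)
```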