
Semantic similarity of sentences using WordNet, NLTK, and basic NumPy


Sentence Similarity Using WordNet

Calculating the semantic similarity between sentences is a long-standing problem in natural language processing.

Semantic analysis plays a crucial role in text-analytics research, and the notion of semantic similarity differs as the domain of operation differs. In this paper, we present a methodology that deals with this issue by incorporating semantic similarity and corpus statistics. To calculate the semantic similarity between words and sentences, the proposed method follows an edge-based approach over a lexical database. The methodology can be applied in a variety of domains and has been tested on both a benchmark standard dataset and a mean human similarity dataset. On these two datasets it gives the highest correlation values for both word and sentence similarity, outperforming other similar models: the Pearson correlation coefficient is 0.8753 for word similarity and 0.8794 for sentence similarity.

THE METHODOLOGY

The methodology treats the text as a sequence of words and handles each word in a sentence separately according to its semantic and syntactic role. The information content of a word is related to the frequency of the word's meaning in a lexical database or a corpus. The method for calculating the semantic similarity between two sentences is divided into the following parts (a sketch of the first part is given after the list):

  • Word similarity
  • Sentence similarity
  • Word order similarity
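
As a concrete illustration of the word-similarity step, here is a minimal sketch built on NLTK's WordNet interface. It uses an edge-based measure in the style of Li et al., which the referenced paper builds on: the shortest path length between two synsets is combined with the depth of their lowest common hypernym. The function name `word_similarity` and the constants `ALPHA` and `BETA` are illustrative assumptions, not taken from this repository's code.

```python
import math

from nltk.corpus import wordnet as wn

# Scaling constants for path length and subsumer depth. These are the
# values commonly used in the edge-based similarity literature (Li et
# al.); they are an assumption here, not read from this repository.
ALPHA = 0.2
BETA = 0.45


def word_similarity(word1: str, word2: str) -> float:
    """Edge-based WordNet similarity between two words.

    For every synset pair, combine the shortest path length l between
    the synsets with the depth h of their lowest common hypernym, and
    keep the best-scoring pair. Returns 0.0 when a word is missing
    from WordNet or no path exists.
    """
    best = 0.0
    for s1 in wn.synsets(word1):
        for s2 in wn.synsets(word2):
            l = s1.shortest_path_distance(s2)
            if l is None:  # no path between the synsets
                continue
            subsumers = s1.lowest_common_hypernyms(s2)
            h = max((s.max_depth() for s in subsumers), default=0)
            # sim = e^(-ALPHA*l) * tanh(BETA*h); the tanh term equals
            # (e^(B*h) - e^(-B*h)) / (e^(B*h) + e^(-B*h)).
            best = max(best, math.exp(-ALPHA * l) * math.tanh(BETA * h))
    return best
```

For synonyms such as "car" and "automobile" the shared synset gives a path length of zero, so the score approaches 1.0, while unrelated word pairs score close to 0.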

LIBRARY USED

  1. NLTK
  2. WordNet corpus
  3. NumPy
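
Assuming a standard NLTK installation, the corpus data must be fetched once before the sketch above will run; the package identifiers below are the standard NLTK ones:

```python
# One-time setup: download the WordNet data used by nltk.corpus.wordnet.
import nltk

nltk.download("wordnet")   # the WordNet lexical database
nltk.download("omw-1.4")   # Open Multilingual WordNet, needed by newer NLTK releases
```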

TEST DATA AND ANALYSIS

COMING SOON...

Reference

https://arxiv.org/pdf/1802.05667.pdf