
Built a multi-headed model capable of detecting different types of toxicity such as threats, obscenity, insults, and identity-based hate.


Toxic-Comment-Classification

A multi-label classification problem to promote good online conversations

Problem

The problem originates from online forums, where people participate in discussions and leave comments. Hosting organizations are constantly on the lookout for abusive, insulting, or hate-based comments to ensure that the conversations on their forums stay civil.

The task is to build a model that predicts, for each comment, which of six categories apply: toxic, severe toxic, obscene, threat, insult, and identity hate. A comment that matches none of the categories is considered clean. Because a single comment can fall into several categories at once, this is a multi-label (rather than multi-class) classification problem.
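A minimal sketch of one way such a multi-label setup can be wired with scikit-learn (which the project depends on): a TF-IDF vectorizer feeding a one-vs-rest logistic regression, where each of the six labels gets its own binary classifier. The toy comments and label matrix below are invented for illustration; the real data would come from the competition's training CSV.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline

# The six target categories of the problem
LABELS = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]

# Tiny invented corpus standing in for the real training data
comments = [
    "you are a wonderful person",
    "I will hurt you, you idiot",
    "those people are subhuman",
    "you stupid disgusting fool",
]
# One binary column per label; a row can have several 1s (multi-label)
y = np.array([
    [0, 0, 0, 0, 0, 0],
    [1, 1, 0, 1, 1, 0],
    [1, 0, 0, 0, 0, 1],
    [1, 0, 1, 0, 1, 0],
])

# One-vs-rest trains an independent binary classifier per label
model = make_pipeline(
    TfidfVectorizer(),
    OneVsRestClassifier(LogisticRegression(max_iter=1000)),
)
model.fit(comments, y)

# Predictions come back as one binary column per label
preds = model.predict(["have a nice day"])
print(preds.shape)  # (1, 6)
```

A row of all zeros in `preds` corresponds to a clean comment.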

Installation Requirements

  • scikit-learn
  • scipy
  • numpy
  • pandas
  • stop_words
  • matplotlib
  • nltk
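Assuming a standard Python environment, these can typically be installed with pip (note that the `stop_words` library is published on PyPI as `stop-words`):

```shell
pip install scikit-learn scipy numpy pandas stop-words matplotlib nltk
```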