Toxicity-Classification

Build a model to identify toxic statements and reduce bias in classification

Dataset obtained from the Kaggle competition: https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/overview

Language Used: Python 3

The project contains a Jupyter Notebook implementing three models for the toxicity classification problem:

  1. Logistic Regression
  2. Random Forests
  3. Gradient Boosted Machines

All three models achieve an accuracy of approximately 92%.
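
For reference, below is a minimal sketch of the kind of baseline the first model represents: TF-IDF features fed into a logistic regression classifier. This is not the notebook's exact code; it assumes the competition's train.csv with its "comment_text" and "target" columns, and binarizes the toxicity score at 0.5 as the competition does.

```python
# Hypothetical baseline sketch: TF-IDF + logistic regression.
# Assumes train.csv from the Kaggle competition, with the
# "comment_text" and "target" columns it provides.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

train = pd.read_csv("train.csv")  # path assumed, from the Kaggle dataset

# The competition gives toxicity as a fraction in [0, 1];
# binarize at 0.5 to get toxic / non-toxic labels.
X = train["comment_text"]
y = (train["target"] >= 0.5).astype(int)

X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

# Turn raw comments into sparse TF-IDF vectors (unigrams + bigrams).
vectorizer = TfidfVectorizer(max_features=50_000, ngram_range=(1, 2))
X_train_vec = vectorizer.fit_transform(X_train)
X_val_vec = vectorizer.transform(X_val)

clf = LogisticRegression(max_iter=1000)
clf.fit(X_train_vec, y_train)

print("Validation accuracy:", accuracy_score(y_val, clf.predict(X_val_vec)))
```

The random forest and gradient boosted machine models follow the same pattern, swapping the classifier (e.g. scikit-learn's RandomForestClassifier or GradientBoostingClassifier) while keeping the feature pipeline.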