This project focuses on building a machine learning model to identify toxic comments in text. It provides a user-friendly web-based interface where users can enter a text comment, which the model then classifies as either toxic or non-toxic.
This project comprises the following components:
- Model Training: The machine learning model for comment toxicity detection is trained using labeled data containing both toxic and non-toxic comments.
- Web Application: A Flask-based web application (found in `app.py`) serves as the user interface for entering comments and obtaining real-time predictions from the trained model.
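The training component can be sketched with a standard scikit-learn text-classification pipeline. The tiny inline dataset, and the choice of TF-IDF features with logistic regression, are assumptions for illustration only; the repository's actual training code and labeled dataset may differ.

```python
# Sketch of a comment-toxicity classifier: TF-IDF features + logistic regression.
# The four hand-written comments below are illustrative stand-ins for a real
# labeled dataset of toxic and non-toxic comments.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

comments = [
    "you are wonderful",
    "I hate you, idiot",
    "have a nice day",
    "shut up, loser",
]
labels = [0, 1, 0, 1]  # 0 = non-toxic, 1 = toxic

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(comments, labels)

print(model.predict(["what a lovely idea"])[0])
```

A fitted pipeline like this can then be pickled and loaded by the web application at startup, so predictions are served without retraining.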
Before running the application, make sure you have the following prerequisites installed:
- Python 3.7 or higher
- Flask (Python web framework)
- Scikit-learn (machine learning library)
- NLTK (Natural Language Toolkit)
- NumPy (numerical computing library)
You can install the required Python packages using pip:
```bash
pip install Flask scikit-learn nltk numpy
```
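NLTK itself is installed via pip, but its corpora and tokenizer data ship separately. If the project's text preprocessing uses NLTK stopwords or tokenizers (an assumption — check the training code for the exact resources it needs), they can be downloaded once after installation:

```python
# Download common NLTK resources used for text preprocessing.
# Which resources are actually required depends on the project's code;
# "stopwords" and "punkt" are typical assumptions, not confirmed names.
import nltk

for resource in ("stopwords", "punkt"):
    try:
        nltk.download(resource, quiet=True)  # no-op if already present
    except Exception:
        pass  # no network available; resources can be fetched later
```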
Follow these steps to install and run the web application:

1. Clone this repository to your local machine:
   ```bash
   git clone https://github.com/Surajrs812/Toxic-comment-classifier.git
   ```
2. Navigate to the project directory:
   ```bash
   cd Toxic-comment-classifier
   ```
3. Run the Flask application:
   ```bash
   python app.py
   ```
4. Open a web browser and go to http://127.0.0.1:5000 to access the web application.
1. Visit the application URL (http://127.0.0.1:5000).
2. You'll see a simple web page with an input field for comments.
3. Enter a text comment and click the "Check Toxicity" button.
4. The web application will display the classification result as either toxic or non-toxic.
The project directory is organized as follows:
- `app.py`: The Flask web application.
- `templates/`: HTML templates for the web application.
- `Model Training/`: Directory to store the trained comment toxicity model.
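The layout above suggests `app.py` ties the pieces together. A minimal sketch of such an app is shown below; the route, the form field name (`comment`), and the stand-in model are assumptions for illustration, not the repository's exact code — the real app would unpickle the trained model from `Model Training/` and render the HTML templates in `templates/`.

```python
# Minimal sketch of a Flask app serving toxicity predictions.
# DummyModel is a placeholder for the trained classifier that the real
# app would load (e.g. unpickle) once at startup.
from flask import Flask, request

class DummyModel:
    """Stand-in for the trained toxicity classifier."""
    def predict(self, texts):
        return [0 for _ in texts]  # always "non-toxic" -- placeholder only

model = DummyModel()
app = Flask(__name__)

@app.route("/", methods=["GET", "POST"])
def index():
    if request.method == "POST":
        comment = request.form.get("comment", "")
        label = model.predict([comment])[0]
        return "Toxic" if label == 1 else "Non-toxic"
    return "Enter a comment to check its toxicity."

# Started with `python app.py`:
# app.run(host="127.0.0.1", port=5000)
```

In the actual application, the response would be rendered through the templates in `templates/` rather than returned as plain text.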
This project was created by Suraj R S.