An AI-Based Trash Can
EcoVision is a computer vision project that focuses on promoting environmental sustainability by automating the process of trash classification. Using the YOLO (You Only Look Once) object detection algorithm, EcoVision enables a smart trash can to classify different types of waste accurately.
The goal of EcoVision is to provide an intelligent solution for waste management by automatically identifying and segregating various types of trash. The system utilizes a YOLO-based object detection model trained on a dataset consisting of six distinct classes: paper, plastic, metal, glass, cardboard, and a general class for all other types of waste.
- Object Detection: EcoVision employs the YOLO algorithm to detect and locate objects in real-time video streams captured by a camera placed above the trash can.
- Trash Classification: Once an object is detected, EcoVision uses its trained model to classify the object into one of the predefined classes, including paper, plastic, metal, glass, cardboard, or a general trash category.
- Segregation and Sorting: Based on the classification results, the trash can automatically separates and sorts the waste into different compartments dedicated to each specific trash type.
- Real-time Feedback: EcoVision provides immediate visual feedback, such as a graphical display or LED indicators, to indicate the category assigned to the detected trash object. (A minimal sketch of this detect-classify-route loop follows the list.)
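The sketch below shows one way these pieces could fit together, assuming the trained weights are loaded as a custom YOLOv5 model via `torch.hub`; `route_to_compartment` is a hypothetical stand-in for the actual sorting hardware, and the 0.5 confidence threshold is an illustrative choice:

```python
import cv2
import torch

# Load the trained weights as a custom YOLOv5 model (path is an assumption).
model = torch.hub.load('ultralytics/yolov5', 'custom', path='best.pt')

def route_to_compartment(label):
    # Hypothetical placeholder: drive the sorting mechanism for this class.
    print(f'Routing detected object to the {label} compartment')

cap = cv2.VideoCapture(0)  # camera placed above the trash can
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # YOLOv5 expects RGB input
    results = model(rgb)
    for *_, conf, cls in results.xyxy[0].tolist():
        if conf > 0.5:  # only act on confident detections
            route_to_compartment(model.names[int(cls)])
    # Draw boxes and labels on the frame as real-time visual feedback.
    annotated = cv2.cvtColor(results.render()[0], cv2.COLOR_RGB2BGR)
    cv2.imshow('EcoVision', annotated)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
```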
- Python: The project is implemented in the Python programming language due to its extensive libraries and frameworks for computer vision tasks.
- YOLO (You Only Look Once): The YOLO object detection algorithm is utilized for accurate and efficient real-time object detection.
- OpenCV: The OpenCV library is employed for image and video processing, including capturing, pre-processing, and post-processing video frames.
- Deep Learning Framework: A popular deep learning framework such as TensorFlow or PyTorch is utilized for training the object detection model.
- Hardware Setup: The system requires a camera for capturing video footage and a microcontroller or single-board computer for running the code and controlling the trash can.
- Python >= 3.8
- The requirements specified in the YOLOv5 repository's requirements.txt
To use EcoVision, follow these steps:
- Set up the hardware by placing a camera above the trash can and connecting it to the microcontroller or single-board computer.
- Install the required dependencies and libraries specified in the project's documentation.
- Download or train the YOLO-based object detection model using the provided dataset or your own dataset.
- Configure the system to run in real-time video mode, capturing frames from the camera feed.
- Apply the object detection model to the captured frames to detect and classify the trash objects.
- Implement the segregation and sorting mechanism in the trash can to direct the classified trash objects into separate compartments (see the sketch after this list).
- Provide appropriate visual feedback to the user to indicate the category assigned to the detected trash object.
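One possible shape for that sorting mechanism, sketched here under the assumption of a single servo-driven chute on a Raspberry Pi; the GPIO pin, angles, and class-to-compartment mapping are all illustrative, not part of the project:

```python
import time
import RPi.GPIO as GPIO

SERVO_PIN = 18  # assumption: servo signal wired to GPIO 18 (BCM numbering)

# Chute angle for each compartment; the values and the name of the
# general class ('trash') are illustrative assumptions.
COMPARTMENT_ANGLE = {
    'paper': 0, 'plastic': 36, 'metal': 72,
    'glass': 108, 'cardboard': 144, 'trash': 180,
}

GPIO.setmode(GPIO.BCM)
GPIO.setup(SERVO_PIN, GPIO.OUT)
pwm = GPIO.PWM(SERVO_PIN, 50)  # standard 50 Hz hobby-servo signal
pwm.start(0)

def sort_into(label):
    """Rotate the chute so the detected item drops into the matching compartment."""
    angle = COMPARTMENT_ANGLE[label]
    duty = 2.5 + angle / 18.0  # map 0-180 degrees onto a 2.5-12.5% duty cycle
    pwm.ChangeDutyCycle(duty)
    time.sleep(0.5)            # give the servo time to reach the position
    pwm.ChangeDutyCycle(0)     # stop pulsing to avoid servo jitter
```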
Use Google Colab for cloud-based training.
- First, mount your Google Drive using this command
```python
from google.colab import drive
drive.mount('/content/drive')
```
- Download our dataset from the following drive link and upload it to your Google Drive.
- Extract the dataset using the command
```python
import zipfile

# Set zip_file_path to the uploaded archive and extract_path to a destination folder.
with zipfile.ZipFile(zip_file_path, 'r') as zip_ref:
    zip_ref.extractall(extract_path)
```
- Clone the YOLOv5 repository and install its requirements
```python
!git clone https://github.com/ultralytics/yolov5
%cd yolov5
%pip install -qr requirements.txt
```
- Then start training using this command
!python train.py --img 416 --batch 16 --epochs 150 --data {dataset.location}/data.yaml --weights yolov5s.pt --cache
Specify the path of the data correctly (an example data.yaml is shown after these steps). You can change the batch size and number of epochs according to your needs and desired accuracy.
- Download the trained weights 'best.pt' from
runs/train/exp
to your device. Alternatively, you can use our pretrained weights from the following drive link.
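For reference, the data.yaml passed to train.py uses the standard YOLOv5 dataset format. A version for the six classes might look like the following; the paths and the name of the general class are assumptions, so adjust them to your dataset layout:

```yaml
train: ../dataset/train/images  # adjust to your training images
val: ../dataset/valid/images    # adjust to your validation images

nc: 6  # number of classes
names: ['paper', 'plastic', 'metal', 'glass', 'cardboard', 'trash']
```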
- Install Python >= 3.8
- Create a virtual environment named "yolo" by running the command
python -m venv yolo
- Run the following commands to activate the virtual environment (on Linux/macOS, run source yolo/bin/activate instead)
cd yolo/scripts
activate.bat
cd ../..
- Run
pip install -r requirements.txt
- Run the following command
python detect.py --weights best.pt --source 0
to deploy on your system, such as a Raspberry Pi, a Jetson Nano, or another device.
- Note: Specify the correct camera source (e.g. --source 0 for the default webcam, --source 1 for a second camera, or a path to a video file).
If you wish to contribute to EcoVision, please follow these guidelines:
- Fork the repository and create a new branch for your contributions.
- Ensure that your code adheres to the project's coding style and follows best practices.
- Clearly document any new features, changes, or bug fixes in the code.
- Test your changes thoroughly and provide relevant test cases when applicable.
- Create a pull request and describe the changes you've made in detail.
We would like to express our gratitude to the open-source community and the developers behind YOLO, OpenCV, and the deep learning frameworks that made this project possible.