This repository contains a YOLOv8-based model for detecting personal protective equipment (PPE), using ONNX for CPU inference and TensorRT for GPU inference to reduce inference time.
- Clone the GitHub repository.
- Install the required dependencies:
  pip install -r requirements.txt
If your annotations are in Pascal VOC format and you need them in YOLO format, you can use the provided script:
python pascaltoVOC_to_YOLO.py "path_to_input_folder" "path_to_output_folder"
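For context, the conversion maps each VOC bounding box (xmin, ymin, xmax, ymax) to YOLO's normalized (class_id, x_center, y_center, width, height) format. The sketch below shows the general idea, assuming one .xml file per image; the CLASS_NAMES list is a placeholder and may not match the classes used by pascaltoVOC_to_YOLO.py.

```python
import sys
import xml.etree.ElementTree as ET
from pathlib import Path

# Placeholder class list; the actual script may use different names/order.
CLASS_NAMES = ["person", "hard-hat", "gloves", "mask", "glasses", "boots", "vest"]

def voc_to_yolo(xml_path: Path, out_dir: Path) -> None:
    root = ET.parse(xml_path).getroot()
    img_w = float(root.find("size/width").text)
    img_h = float(root.find("size/height").text)
    lines = []
    for obj in root.iter("object"):
        name = obj.find("name").text
        if name not in CLASS_NAMES:
            continue  # skip labels outside the assumed class list
        box = obj.find("bndbox")
        xmin, ymin = float(box.find("xmin").text), float(box.find("ymin").text)
        xmax, ymax = float(box.find("xmax").text), float(box.find("ymax").text)
        # Normalize to YOLO format: box center and size relative to image size.
        xc = (xmin + xmax) / 2.0 / img_w
        yc = (ymin + ymax) / 2.0 / img_h
        w = (xmax - xmin) / img_w
        h = (ymax - ymin) / img_h
        lines.append(f"{CLASS_NAMES.index(name)} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}")
    (out_dir / f"{xml_path.stem}.txt").write_text("\n".join(lines))

if __name__ == "__main__":
    in_dir, out_dir = Path(sys.argv[1]), Path(sys.argv[2])
    out_dir.mkdir(parents=True, exist_ok=True)
    for xml_file in in_dir.glob("*.xml"):
        voc_to_yolo(xml_file, out_dir)
```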
To split your dataset into training and validation sets, use the train_validation_split.py script.
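The options of train_validation_split.py are not documented here; conceptually, a split shuffles the image/label pairs and copies them into train and val subfolders. A rough sketch, assuming a flat dataset/images + dataset/labels layout and an 80/20 split:

```python
import random
import shutil
from pathlib import Path

# Assumed layout and split ratio; adjust to match your dataset and the script's options.
DATASET = Path("dataset")
TRAIN_RATIO = 0.8

images = sorted((DATASET / "images").glob("*.jpg"))
random.seed(42)
random.shuffle(images)
split = int(len(images) * TRAIN_RATIO)

for subset, files in (("train", images[:split]), ("val", images[split:])):
    img_out = DATASET / subset / "images"
    lbl_out = DATASET / subset / "labels"
    img_out.mkdir(parents=True, exist_ok=True)
    lbl_out.mkdir(parents=True, exist_ok=True)
    for img in files:
        shutil.copy(img, img_out / img.name)
        label = DATASET / "labels" / f"{img.stem}.txt"
        if label.exists():  # copy the matching YOLO label file, if present
            shutil.copy(label, lbl_out / label.name)
```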
If you have a custom dataset and want to train the model:
- Use the provided Jupyter notebook.
- Replace the path of the configuration file and the model with your custom paths.
- Download the weights for the detection model and place them in the weights folder. Use ONNX for faster CPU inference and TensorRT for faster GPU inference (a typical training and export flow is sketched below).
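The notebook's exact contents are not reproduced here, but training and exporting with the Ultralytics YOLOv8 API typically looks like the following sketch; data.yaml and yolov8n.pt are placeholders for your configuration file and base weights.

```python
from ultralytics import YOLO

# Placeholder paths; replace with your configuration file and base weights.
model = YOLO("yolov8n.pt")
model.train(data="data.yaml", epochs=100, imgsz=640)

# Export the trained weights for faster inference:
# ONNX for CPU, TensorRT engine for GPU (requires a TensorRT installation).
model.export(format="onnx")
model.export(format="engine")
```

The exported .onnx file can be run with ONNX Runtime on CPU, while the .engine file requires TensorRT and a compatible GPU.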
To perform inference using the trained model:
- If using the default models and folder paths, place your images in the test_inputs folder and simply run:
  python inference.py
  Results will be available in the test_outputs folder by default. (A minimal sketch of the ONNX CPU inference flow follows these steps.)
- If using custom settings, run:
python inference.py -i "path_to_input_directory" -o "path_to_outputs_directory" --person_model "path_to_person_detection_model" --ppe_model "path_to_ppe_detection_model"
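For reference, the default CPU path loads an exported ONNX model with ONNX Runtime. The sketch below assumes a 640x640 input and placeholder file names, and only shows the raw forward pass; the actual inference.py additionally applies post-processing (confidence filtering and NMS) and uses both the person and PPE detection models.

```python
import cv2
import numpy as np
import onnxruntime as ort

# Placeholder model and image paths; the names used by inference.py may differ.
session = ort.InferenceSession("weights/ppe_model.onnx",
                               providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name

# Preprocess: resize to the assumed 640x640 input, BGR -> RGB, scale to [0, 1], NCHW.
image = cv2.imread("test_inputs/example.jpg")
blob = cv2.resize(image, (640, 640))[:, :, ::-1].astype(np.float32) / 255.0
blob = np.ascontiguousarray(blob.transpose(2, 0, 1)[np.newaxis, ...])

# Raw YOLOv8 output has shape (1, 4 + num_classes, num_candidates);
# confidence filtering and NMS are still needed to obtain final boxes.
outputs = session.run(None, {input_name: blob})[0]
print(outputs.shape)
```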
This project is licensed under the MIT License.