Fire is widely recognized for its destructive power, making fire prevention crucial. In this repository, we present a highly compatible fire detection model designed specifically for surveillance camera views.
Base model: YOLOv8m

Two pretrained weights are provided:
- FireDet640: trained with input size (640, 640)
- FireDet1280: trained with a larger input size (1280, 1280)
| Model | Input size (pixels) | mAP@0.5 |
|---|---|---|
| FireDet640 | 640 | 0.77 |
| FireDet1280 | 1280 | 0.86 |
- Python >= 3.7
- CUDA >= 11.0 (tested with 11.3)
- PyTorch (tested with 1.11.0)
- Ultralytics YOLOv8:

  ```
  pip install ultralytics
  ```

- If running on a server, use the Conda environment `yolov8`.
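Before going further, it can help to sanity-check the environment. The snippet below is not part of the repo; it simply verifies the Python version requirement above and reports whether PyTorch can see a CUDA device:

```python
import sys

# Require Python >= 3.7, as listed in the requirements above.
assert sys.version_info >= (3, 7), "Python >= 3.7 is required"

try:
    import torch  # installed as a dependency of ultralytics

    print("PyTorch:", torch.__version__)
    print("CUDA available:", torch.cuda.is_available())
except ImportError:
    print("PyTorch is not installed yet -- run `pip install ultralytics` first")
```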
Download and prepare the dataset in YOLO format. Tools such as Roboflow are highly recommended if you want to prepare your own fire dataset. The generated dataset should contain a YAML file, for example, `train_data.yaml`.
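For reference, a minimal `train_data.yaml` in the standard YOLO layout might look like the sketch below. The paths and class list are placeholders, not the repo's actual config; adjust them to your dataset:

```yaml
# Hypothetical dataset config -- adjust the paths to your own data
train: ../fire_dataset/train/images
val: ../fire_dataset/valid/images

nc: 1            # number of classes
names: ['fire']  # class names
```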
Assuming you have installed ultralytics and the other dependencies, and prepared the training dataset in YOLO format, you can train the model in either of two ways:
- Ultralytics CLI (recommended)

  From scratch:

  ```
  yolo detect train data='train_data.yaml' model='yolov8m.pt' epochs=100 imgsz=640 batch=32 device=0,1 workers=8
  ```

  Resume an interrupted training:

  ```
  yolo detect train resume model='weights/FireDet1280-last.pt'
  ```

  See the train docs for more details.
- Python script: `train.py`
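`train.py` is the repo's own script; if you instead want to call the Ultralytics Python API directly, a minimal equivalent of the CLI command above could look like the sketch below (the function name and defaults here are illustrative, not taken from the repo):

```python
def train_fire_detector(data_yaml: str = "train_data.yaml") -> None:
    """Launch training with the same settings as the CLI example above."""
    # Imported lazily so this file can be imported without ultralytics installed.
    from ultralytics import YOLO

    # Start from the pretrained YOLOv8m checkpoint.
    model = YOLO("yolov8m.pt")
    model.train(
        data=data_yaml,   # dataset config in YOLO format
        epochs=100,
        imgsz=640,
        batch=32,
        device=[0, 1],    # adjust to the GPUs you have
        workers=8,
    )
```

Call `train_fire_detector()` to start a run; checkpoints land under `runs/detect/` by Ultralytics' default convention.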
For validation, simply use the command-line interface provided by Ultralytics. First, change the `val` path in your YAML file to the folder used for validation, for example, `../benchmark/images`, then run the following command:

```
yolo detect val data='data.yaml' model='weights/FireDet1280.pt' device=0,1
```
Run inference in either of two ways:
- Ultralytics CLI (videos)

  ```
  yolo detect predict model='weights/FireDet1280.pt' source='assets/case2_house.mp4' show=True
  ```

  See the predict docs for more details.
- Python script: `inference.py` (supports both images and videos)
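`inference.py` is the repo's script; as a rough sketch of the same idea via the Ultralytics Python API (the function name and parameters below are hypothetical, not the repo's):

```python
def detect_fire(source: str, weights: str = "weights/FireDet1280.pt"):
    """Yield detection boxes for each frame of an image or video source."""
    # Imported lazily so this file can be imported without ultralytics installed.
    from ultralytics import YOLO

    model = YOLO(weights)
    # stream=True yields results frame by frame, keeping memory bounded for videos.
    for result in model.predict(source=source, imgsz=1280, stream=True):
        # result.boxes holds xyxy coordinates and confidences of detected fire.
        yield result.boxes
```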
First, you need to convert the model's ONNX file to a TensorRT engine in order to run inference. Follow this repo from the step "Build End2End Engine from ONNX" using `build.py`; you should end up with the converted engine file.
After installing the TensorRT and OpenCV libraries, navigate to `cpp/inference.cpp`, modify the engine path on line 27 (`const std::string engine_file_path`) and the input size on line 78 (`cv::Size size = cv::Size{640, 640}`), and everything will be ready for inference with TensorRT.
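For context on the hard-coded `cv::Size{640, 640}`: a TensorRT engine expects a fixed input size, so frames are typically letterboxed (aspect-preserving resize plus gray padding) before inference. The sketch below illustrates that idea in pure NumPy; it is an illustration of the concept, not the actual preprocessing in `cpp/inference.cpp` (which uses OpenCV):

```python
import numpy as np

def letterbox(img: np.ndarray, new_size: int = 640, pad_value: int = 114) -> np.ndarray:
    """Resize keeping aspect ratio, then pad to (new_size, new_size)."""
    h, w = img.shape[:2]
    scale = new_size / max(h, w)
    nh, nw = int(round(h * scale)), int(round(w * scale))
    # Nearest-neighbour resize in pure NumPy (cv2.resize would be used in practice).
    rows = (np.arange(nh) / scale).astype(int).clip(0, h - 1)
    cols = (np.arange(nw) / scale).astype(int).clip(0, w - 1)
    resized = img[rows][:, cols]
    # Center the resized image on a gray canvas (114 is YOLO's usual pad color).
    out = np.full((new_size, new_size, img.shape[2]), pad_value, dtype=img.dtype)
    top, left = (new_size - nh) // 2, (new_size - nw) // 2
    out[top:top + nh, left:left + nw] = resized
    return out
```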