FireDet-YOLOv8


Real-Time Fire Detection from Surveillance Camera CCTV


Fire is widely recognized for its destructive power, making fire prevention crucial. In this repository, we present a real-time fire detection model designed specifically for the surveillance camera viewpoint.

Model: FireDet

Base model: YOLOv8m

Two pretrained weights are supported:

Model         Input size (pixels)   mAP@0.5
FireDet640    640                   0.77
FireDet1280   1280                  0.86

Requirements

  • Python >= 3.7
  • CUDA >= 11.0 (tested with 11.3)
  • PyTorch (tested with 1.11.0)
  • Ultralytics YOLOv8
    pip install ultralytics
  • If running on a server, activate the conda environment: yolov8
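As a quick sanity check of the requirements above, the sketch below verifies the interpreter version and reports the PyTorch/CUDA setup if it is installed. The `version_tuple` helper is introduced here for illustration and is not part of the repo:

```python
# Environment sanity check (sketch) for the requirements listed above.
import sys

def version_tuple(v: str) -> tuple:
    # "1.11.0+cu113" -> (1, 11, 0); drops any local build suffix after "+"
    return tuple(int(p) for p in v.split("+")[0].split(".") if p.isdigit())

assert sys.version_info >= (3, 7), "Python >= 3.7 required"

try:
    import torch
    print("PyTorch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
except ImportError:
    print("PyTorch not installed yet - run: pip install ultralytics")
```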

Dataset preparation

Download and prepare the dataset in YOLO format. Tools such as Roboflow are highly recommended if you want to prepare your own fire dataset. The generated dataset should contain a YAML file, for example, train_data.yaml.
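For reference, a minimal train_data.yaml might look like the following; the paths and the single fire class are assumptions, so adjust them to your own dataset:

```yaml
# Example YOLO dataset config (sketch - adapt paths and classes to your data)
path: ../fire_dataset        # dataset root
train: images/train          # training images, relative to path
val: images/val              # validation images, relative to path

nc: 1                        # number of classes
names: ['fire']              # class names
```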

Training

Assuming you have installed ultralytics and the other dependencies and prepared the training dataset in YOLO format, you can train the model in either of two ways:

  1. Ultralytics CLI (recommended)

    From scratch

    yolo detect train data='train_data.yaml' model='yolov8m.pt' epochs=100 imgsz=640 batch=32 device=0,1 workers=8

    Resume an interrupted training

    yolo detect train resume model='weights/FireDet1280-last.pt'

    See train docs for more details.

  2. Python script train.py
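The repo's actual train.py may differ; a minimal sketch using the Ultralytics Python API, mirroring the CLI command above, could look like this (the deferred import keeps the file importable even without ultralytics installed):

```python
def train_firedet(data_yaml: str = "train_data.yaml", imgsz: int = 640):
    """Sketch: fine-tune YOLOv8m on a fire dataset (mirrors the CLI above)."""
    from ultralytics import YOLO  # deferred import

    model = YOLO("yolov8m.pt")  # start from the pretrained base model
    # Same hyperparameters as the CLI example above
    return model.train(data=data_yaml, epochs=100, imgsz=imgsz,
                       batch=32, device=[0, 1], workers=8)
```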

Validation

For validation, simply use the command-line usage provided by Ultralytics. First, change the val path in your YAML file to the folder used for validation, for example, ../benchmark/images and run the following command:

yolo detect val data='data.yaml' model='weights/FireDet1280.pt' device=0,1
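Equivalently, validation can be run from Python. This is a sketch assuming the YAML's val path has already been changed as described above; the metrics attributes follow the Ultralytics API:

```python
def validate_firedet(data_yaml: str = "data.yaml",
                     weights: str = "weights/FireDet1280.pt"):
    """Sketch: compute detection metrics on the validation split."""
    from ultralytics import YOLO  # deferred import

    model = YOLO(weights)
    metrics = model.val(data=data_yaml, device=[0, 1])
    print("mAP@0.5:", metrics.box.map50)  # the repo reports 0.86 for FireDet1280
    return metrics
```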

Inference in Python

Run inference in either of two ways:

  1. Ultralytics CLI (videos)

    yolo detect predict model='weights/FireDet1280.pt' source='assets/case2_house.mp4' show=True

    See predict docs for more details.

  2. Python script inference.py (both images and videos)
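The repo's inference.py is not shown here; a minimal Python sketch of image/video prediction with the Ultralytics API might look like this (the confidence threshold is an assumed default):

```python
def run_inference(source: str, weights: str = "weights/FireDet1280.pt",
                  conf: float = 0.25):
    """Sketch: run FireDet on an image or video and print detections."""
    from ultralytics import YOLO  # deferred import

    model = YOLO(weights)
    results = model.predict(source=source, conf=conf)
    for r in results:
        for box in r.boxes:
            # box.xyxy: pixel coordinates, box.conf: confidence score
            print(box.xyxy.tolist(), float(box.conf))
    return results
```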

Inference in C++

First, you need to convert the model's ONNX file to a TensorRT engine in order to run inference. Follow this repo from the step Build End2End Engine from ONNX using build.py; you should end up with the converted engine file.

After installing the TensorRT and OpenCV libraries, navigate to cpp/inference.cpp, modify the engine path on line 27 (const std::string engine_file_path) and the input size on line 78 (cv::Size size = cv::Size{640, 640}), and everything will be ready for inference using TensorRT.