
Pytorch-YOLOv4

A minimal PyTorch implementation of YOLOv4.

├── README.md
├── dataset.py       dataset loading
├── demo.py          demo to run a PyTorch model (uses tool/darknet2pytorch)
├── darknet2onnx.py  tool to convert a Darknet model into ONNX (uses tool/darknet2pytorch)
├── demo_onnx.py     demo to run the converted ONNX model
├── models.py        model definition for PyTorch
├── train.py         training script for models.py
├── cfg.py           configuration for train.py
├── cfg              Darknet cfg files (input to darknet2pytorch)
├── data
├── weight           Darknet weight files (input to darknet2pytorch)
├── tool
│   ├── camera.py           camera demo
│   ├── coco_annotatin.py       COCO annotation generator
│   ├── config.py
│   ├── darknet2pytorch.py
│   ├── region_loss.py
│   ├── utils.py
│   └── yolo_layer.py


0. Weight

0.1 darknet

0.2 pytorch

You can use darknet2pytorch to convert the weights yourself, or download my converted model.

1. Train

Use YOLOv4 to train on your own data.

  1. Download weight

  2. Transform data

    For the COCO dataset, you can use tool/coco_annotatin.py.

    # train.txt
    image_path1 x1,y1,x2,y2,id x1,y1,x2,y2,id x1,y1,x2,y2,id ...
    image_path2 x1,y1,x2,y2,id x1,y1,x2,y2,id x1,y1,x2,y2,id ...
    ...
    ...
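
    As an illustrative sketch only (the repo's actual generator is tool/coco_annotatin.py), the helper below builds one such train.txt line from COCO-style annotations, assuming each annotation is a dict with a 'bbox' of [x, y, w, h] (top-left corner plus size) and a 'category_id':

    ```python
    def coco_to_line(image_path, annotations):
        """Build one train.txt line: image_path x1,y1,x2,y2,id ...

        annotations: list of dicts with 'bbox' = [x, y, w, h] (COCO's
        top-left + size convention) and a 'category_id'.
        """
        boxes = []
        for ann in annotations:
            x, y, w, h = ann['bbox']
            # convert [x, y, w, h] to the x1,y1,x2,y2 corner format
            boxes.append('%d,%d,%d,%d,%d' % (x, y, x + w, y + h, ann['category_id']))
        return ' '.join([image_path] + boxes)
    ```

    Writing one such line per image produces a file in the train.txt format shown above.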
    
  3. Train

    You can set training parameters in cfg.py.

     python train.py -g [GPU_ID] -dir [dataset directory] ...
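
    The authoritative flag definitions live in train.py and cfg.py; purely as a hypothetical sketch, the command line above could be parsed with argparse like this (defaults here are invented for illustration):

    ```python
    import argparse

    # Hypothetical parser mirroring the command above; the real defaults live in cfg.py.
    parser = argparse.ArgumentParser(description='Train YOLOv4 on your own data')
    parser.add_argument('-g', '--gpu', type=str, default='0',
                        help='GPU id to train on')
    parser.add_argument('-dir', '--data_dir', type=str, default='data',
                        help='dataset directory')

    # equivalent to: python train.py -g 1 -dir /data/coco
    args = parser.parse_args(['-g', '1', '-dir', '/data/coco'])
    print(args.gpu, args.data_dir)
    ```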
    

2. Inference

python demo.py <cfgFile> <weightFile> <imgFile>

3. Darknet2ONNX (Evolving)

  • Recommended PyTorch version: 1.4.0

  • Install onnxruntime

    pip install onnxruntime
  • Run the Python script to generate the ONNX model and run the demo

    python demo_onnx.py <cfgFile> <weightFile> <imageFile> <batchSize>

    This script generates two ONNX models:

    • One for running the demo (batch_size=1)
    • One with the batch size you requested (batch_size=batchSize)

4. ONNX2TensorRT (Evolving)

  • Recommended TensorRT version: 7.0 or 7.1

  • Run the following command to convert the YOLOv4 ONNX model into a TensorRT engine

    trtexec --onnx=<onnx_file> --explicitBatch --saveEngine=<tensorRT_engine_file> --workspace=<size_in_megabytes> --fp16
    • Note: If you want to use int8 mode in conversion, extra int8 calibration is needed.
  • Run the demo (this demo only works with batchSize=1)

    python demo_trt.py <tensorRT_engine_file> <input_image> <input_H> <input_W>
    • Note1: input_H and input_W should agree with the input size in the original Darknet cfg file as well as in the generated ONNX file.
    • Note2: extra NMS operations are needed on the TensorRT output. This demo uses TianXiaomo's NMS code from tool/utils.py.
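
    The demo's actual NMS lives in tool/utils.py; as a self-contained sketch of the underlying idea, here is a minimal greedy IoU-based NMS over (x1, y1, x2, y2, score) detections:

    ```python
    def iou(box_a, box_b):
        """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
        ix1 = max(box_a[0], box_b[0])
        iy1 = max(box_a[1], box_b[1])
        ix2 = min(box_a[2], box_b[2])
        iy2 = min(box_a[3], box_b[3])
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
        area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
        union = area_a + area_b - inter
        return inter / union if union > 0 else 0.0

    def nms(detections, iou_threshold=0.4):
        """Greedy NMS: detections is a list of (x1, y1, x2, y2, score)."""
        dets = sorted(detections, key=lambda d: d[4], reverse=True)
        keep = []
        while dets:
            best = dets.pop(0)          # highest remaining score
            keep.append(best)
            # suppress remaining boxes that overlap the kept box too much
            dets = [d for d in dets if iou(best[:4], d[:4]) < iou_threshold]
        return keep
    ```

    Boxes are visited in descending score order; each kept box suppresses every remaining box whose IoU with it reaches the threshold.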

5. ONNX2Tensorflow

Reference:

@article{yolov4,
  title   = {YOLOv4: Optimal Speed and Accuracy of Object Detection},
  author  = {Alexey Bochkovskiy and Chien-Yao Wang and Hong-Yuan Mark Liao},
  journal = {arXiv},
  year    = {2020}
}