PyTorch_YOLOv4

PyTorch implementation of YOLOv4

YOLOv4

This is a PyTorch implementation of YOLOv4, based on ultralytics/yolov3.

Development Log

  • 2021-10-31 - support RS loss, aLRP loss, AP loss.
  • 2021-10-30 - support alpha IoU (a minimal loss sketch follows after this list).
  • 2021-10-20 - design resolution calibration methods.
  • 2021-10-15 - support joint detection, instance segmentation, and semantic segmentation. seg-yolo
  • 2021-10-13 - design ratio yolo.
  • 2021-09-22 - pytorch 1.9 compatibility.
  • 2021-09-21 - support DIM.
  • 2021-09-16 - support Dynamic Head.
  • 2021-08-28 - design domain adaptive training.
  • 2021-08-22 - design re-balance models.
  • 2021-08-21 - support simOTA.
  • 2021-08-14 - design approximation-based methods.
  • 2021-07-27 - design new decoders.
  • 2021-07-22 - support 1) decoupled head, 2) anchor-free, and 3) multi positives in yolox.
  • 2021-07-10 - design distribution-based implicit modeling.
  • 2021-07-06 - support outlooker attention. volo
  • 2021-07-06 - design self-ensemble training method.
  • 2021-06-23 - design cross multi-stage correlation module.
  • 2021-06-18 - design cross stage cross correlation module.
  • 2021-06-17 - support cross correlation module. ccn
  • 2021-06-17 - support attention modules. cbam saan
  • 2021-04-20 - support swin transformer. swin
  • 2021-03-16 - design new stem layers.
  • 2021-03-13 - design implicit modeling. nn mf lc
  • 2021-01-26 - support vision transformer. tr
  • 2021-01-26 - design mask objectness.
  • 2021-01-25 - design rotate augmentation.
  • 2021-01-23 - design collage augmentation.
  • 2021-01-22 - support VoVNet, VoVNetv2.
  • 2021-01-22 - support EIoU.
  • 2021-01-19 - support instance segmentation. mask-yolo
  • 2021-01-17 - support anchor-free-based methods. center-yolo
  • 2021-01-14 - support joint detection and classification. classify-yolo
  • 2021-01-02 - design new PRN and CSP-based models.
  • 2020-12-22 - support transfer learning.
  • 2020-12-18 - support non-local series self-attention blocks. gc dnl
  • 2020-12-16 - support down-sampling blocks in cspnet paper. down-c down-d
  • 2020-12-03 - support imitation learning.
  • 2020-12-02 - support squeeze and excitation.
  • 2020-11-26 - support multi-class multi-anchor joint detection and embedding.
  • 2020-11-25 - support joint detection and embedding. track-yolo
  • 2020-11-23 - support teacher-student learning.
  • 2020-11-17 - pytorch 1.7 compatibility.
  • 2020-11-06 - support inference with initial weights.
  • 2020-10-21 - fully supported by darknet.
  • 2020-09-18 - design fine-tune methods.
  • 2020-08-29 - support deformable kernel.
  • 2020-08-25 - pytorch 1.6 compatibility.
  • 2020-08-24 - support channels-last training/testing.
  • 2020-08-16 - design CSPPRN.
  • 2020-08-15 - design deeper model. csp-p6-mish
  • 2020-08-11 - support HarDNet. hard39-pacsp hard68-pacsp hard85-pacsp
  • 2020-08-10 - add DDP training.
  • 2020-08-06 - support DCN, DCNv2. yolov4-dcn
  • 2020-08-01 - add pytorch hub.
  • 2020-07-31 - support ResNet, ResNeXt, CSPResNet, CSPResNeXt. r50-pacsp x50-pacsp cspr50-pacsp cspx50-pacsp
  • 2020-07-28 - support SAM. yolov4-pacsp-sam
  • 2020-07-24 - update api.
  • 2020-07-23 - support CUDA accelerated Mish activation function.
  • 2020-07-19 - support and train tiny YOLOv4. yolov4-tiny
  • 2020-07-15 - design and train conditional YOLOv4. yolov4-pacsp-conditional
  • 2020-07-13 - support MixUp data augmentation.
  • 2020-07-03 - design new stem layers.
  • 2020-06-16 - support FP16 (half-precision) GPU inference.
  • 2020-06-14 - convert .pt to .weights for darknet fine-tuning.
  • 2020-06-13 - update multi-scale training strategy.
  • 2020-06-12 - design scaled YOLOv4 following ultralytics. yolov4-pacsp-s yolov4-pacsp-m yolov4-pacsp-l yolov4-pacsp-x
  • 2020-06-07 - design scaling methods for CSP-based models. yolov4-pacsp-25 yolov4-pacsp-75
  • 2020-06-03 - update COCO2014 to COCO2017.
  • 2020-05-30 - update FPN neck to CSPFPN. yolov4-yocsp yolov4-yocsp-mish
  • 2020-05-24 - update neck of YOLOv4 to CSPPAN. yolov4-pacsp yolov4-pacsp-mish
  • 2020-05-15 - train YOLOv4 with the Mish activation function. yolov4-yospp-mish yolov4-paspp-mish
  • 2020-05-08 - design and train YOLOv4 with an FPN neck. yolov4-yospp
  • 2020-05-01 - train YOLOv4 with the Leaky ReLU activation function using PyTorch. yolov4-paspp PAN
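
Several of the entries above add IoU-based box-regression losses (e.g. EIoU, alpha IoU). As a rough illustration of the alpha-IoU idea only (a minimal sketch, not this repository's implementation), the snippet below computes a plain alpha-IoU loss for axis-aligned boxes in (x1, y1, x2, y2) format; the function name and the default alpha = 3 are assumptions made for illustration.

# alpha-IoU loss sketch (illustration only, not this repo's implementation).
# Raising IoU to a power alpha > 1 puts relatively more gradient weight on
# boxes that already have high IoU; alpha = 1 reduces to the plain IoU loss.
import torch

def alpha_iou_loss(pred, target, alpha=3.0, eps=1e-7):
    # intersection of (x1, y1, x2, y2) boxes
    x1 = torch.max(pred[:, 0], target[:, 0])
    y1 = torch.max(pred[:, 1], target[:, 1])
    x2 = torch.min(pred[:, 2], target[:, 2])
    y2 = torch.min(pred[:, 3], target[:, 3])
    inter = (x2 - x1).clamp(0) * (y2 - y1).clamp(0)

    # union
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    union = area_p + area_t - inter + eps

    iou = inter / union
    return 1.0 - iou.pow(alpha)

# example: the IoU of these two boxes is about 0.47, so the loss is about 0.90
pred = torch.tensor([[0.0, 0.0, 10.0, 10.0]])
target = torch.tensor([[2.0, 2.0, 12.0, 12.0]])
print(alpha_iou_loss(pred, target))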

Pretrained Models & Comparison

| Model | Test Size | AP^test | AP50^test | AP75^test | APS^test | APM^test | APL^test | cfg | weights |
| :-- | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: |
| YOLOv4 | 640 | 50.0% | 68.4% | 54.7% | 30.5% | 54.3% | 63.3% | cfg | weights |
| YOLOv4-pacsp-s | 640 | 39.0% | 57.8% | 42.4% | 20.6% | 42.6% | 50.0% | cfg | weights |
| YOLOv4-pacsp | 640 | 49.8% | 68.4% | 54.3% | 30.1% | 54.0% | 63.4% | cfg | weights |
| YOLOv4-pacsp-x | 640 | 52.2% | 70.5% | 56.8% | 32.7% | 56.3% | 65.9% | cfg | weights |
| YOLOv4-pacsp-s-mish | 640 | 40.8% | 59.5% | 44.3% | 22.4% | 44.6% | 51.8% | cfg | weights |
| YOLOv4-pacsp-mish | 640 | 50.9% | 69.4% | 55.5% | 31.2% | 55.0% | 64.7% | cfg | weights |
| YOLOv4-pacsp-x-mish | 640 | 52.8% | 71.1% | 57.5% | 33.6% | 56.9% | 66.6% | cfg | weights |

| Model | Test Size | AP^val | AP50^val | AP75^val | APS^val | APM^val | APL^val | cfg | weights |
| :-- | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: |
| YOLOv4 | 640 | 49.7% | 68.2% | 54.3% | 32.9% | 54.8% | 63.7% | cfg | weights |
| YOLOv4-pacsp-s | 640 | 38.9% | 57.7% | 42.2% | 21.9% | 43.3% | 51.9% | cfg | weights |
| YOLOv4-pacsp | 640 | 49.4% | 68.1% | 53.8% | 32.7% | 54.2% | 64.0% | cfg | weights |
| YOLOv4-pacsp-x | 640 | 51.6% | 70.1% | 56.2% | 35.3% | 56.4% | 66.9% | cfg | weights |
| YOLOv4-pacsp-s-mish | 640 | 40.7% | 59.5% | 44.2% | 25.3% | 45.1% | 53.4% | cfg | weights |
| YOLOv4-pacsp-mish | 640 | 50.8% | 69.4% | 55.4% | 34.3% | 55.5% | 65.7% | cfg | weights |
| YOLOv4-pacsp-x-mish | 640 | 52.6% | 71.0% | 57.2% | 36.4% | 57.3% | 67.6% | cfg | weights |

archive
| Model | Test Size | AP^val | AP50^val | AP75^val | APS^val | APM^val | APL^val | cfg | weights |
| :-- | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: |
| YOLOv4 | 640 | 48.4% | 67.1% | 52.9% | 31.7% | 53.8% | 62.0% | cfg | weights |
| YOLOv4-pacsp-s | 640 | 37.0% | 55.7% | 40.0% | 20.2% | 41.6% | 48.4% | cfg | weights |
| YOLOv4-pacsp | 640 | 47.7% | 66.4% | 52.0% | 32.3% | 53.0% | 61.7% | cfg | weights |
| YOLOv4-pacsp-x | 640 | 50.0% | 68.3% | 54.5% | 33.9% | 55.4% | 63.7% | cfg | weights |
| YOLOv4-pacsp-s-mish | 640 | 38.8% | 57.8% | 42.0% | 21.6% | 43.7% | 51.1% | cfg | weights |
| YOLOv4-pacsp-mish | 640 | 48.8% | 67.2% | 53.4% | 31.5% | 54.4% | 62.2% | cfg | weights |
| YOLOv4-pacsp-x-mish | 640 | 51.2% | 69.4% | 55.9% | 35.0% | 56.5% | 65.0% | cfg | weights |

| Model | Test Size | AP^val | AP50^val | AP75^val | APS^val | APM^val | APL^val | cfg | weights |
| :-- | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: |
| YOLOv4 | 672 | 47.7% | 66.7% | 52.1% | 30.5% | 52.6% | 61.4% | cfg | weights |
| YOLOv4-pacsp-s | 672 | 36.6% | 55.5% | 39.6% | 21.2% | 41.1% | 47.0% | cfg | weights |
| YOLOv4-pacsp | 672 | 47.2% | 66.2% | 51.6% | 30.4% | 52.3% | 60.8% | cfg | weights |
| YOLOv4-pacsp-x | 672 | 49.3% | 68.1% | 53.6% | 31.8% | 54.5% | 63.6% | cfg | weights |
| YOLOv4-pacsp-s-mish | 672 | 38.6% | 57.7% | 41.8% | 22.3% | 43.5% | 49.3% | cfg | weights |
| (+BoF) | 640 | 39.9% | 59.1% | 43.1% | 24.4% | 45.2% | 51.4% | | weights |
| YOLOv4-pacsp-mish | 672 | 48.1% | 66.9% | 52.3% | 30.8% | 53.4% | 61.7% | cfg | weights |
| (+BoF) | 640 | 49.3% | 68.2% | 53.8% | 31.9% | 54.9% | 62.8% | | weights |
| YOLOv4-pacsp-x-mish | 672 | 50.0% | 68.5% | 54.4% | 32.9% | 54.9% | 64.0% | cfg | weights |
| (+BoF) | 640 | 51.0% | 69.7% | 55.5% | 33.3% | 56.2% | 65.5% | | weights |

Requirements

Docker (recommended):

# create the docker container; you can increase the shared memory size if you have more RAM
nvidia-docker run --name yolov4 -it -v your_coco_path/:/coco/ -v your_code_path/:/yolo --shm-size=64g nvcr.io/nvidia/pytorch:20.11-py3

# apt install required packages
apt update
apt install -y zip htop screen libgl1-mesa-glx

# pip install required packages
pip install seaborn thop

# install mish-cuda if you want to use mish activation
# https://github.com/thomasbrandon/mish-cuda
# https://github.com/JunnYu/mish-cuda
cd /
git clone https://github.com/JunnYu/mish-cuda
cd mish-cuda
python setup.py build install

# go to code folder
cd /yolo

local:

pip install -r requirements.txt

※ To run Mish models, please install mish-cuda: https://github.com/thomasbrandon/mish-cuda
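
Once mish-cuda is built, the optional smoke test below can confirm that the extension imports and runs on the GPU. The MishCuda class name comes from the mish-cuda project itself, not from this repository; treat the snippet as a sketch and adapt it to your setup.

# optional smoke test for the mish-cuda extension (assumes a CUDA-capable GPU)
import torch
from mish_cuda import MishCuda  # class exported by the mish-cuda package

act = MishCuda()
x = torch.randn(4, 8, device="cuda")
print(act(x).shape)  # should print torch.Size([4, 8]) without raising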

Training

python train.py --device 0 --batch-size 16 --img 640 640 --data coco.yaml --cfg cfg/yolov4-pacsp.cfg --weights '' --name yolov4-pacsp

Testing

python test.py --img 640 --conf 0.001 --batch 8 --device 0 --data coco.yaml --cfg cfg/yolov4-pacsp.cfg --weights weights/yolov4-pacsp.pt
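
For quick experiments outside test.py, a trained checkpoint can also be loaded directly in Python. The sketch below is an assumption based on the ultralytics/yolov3-style layout this repository builds on (a Darknet model constructed from the cfg file and a checkpoint dict holding the model under a 'model' key); the module path and checkpoint keys may differ, so adjust them to the actual code.

# hedged sketch: load a trained checkpoint and run a single forward pass
# (module path and checkpoint layout are assumptions; adjust to the actual code)
import torch
from models.models import Darknet  # assumed location of the model class

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = Darknet("cfg/yolov4-pacsp.cfg", img_size=640).to(device).eval()

ckpt = torch.load("weights/yolov4-pacsp.pt", map_location=device)
state = ckpt["model"].state_dict() if hasattr(ckpt["model"], "state_dict") else ckpt["model"]
model.load_state_dict(state, strict=False)

img = torch.zeros(1, 3, 640, 640, device=device)  # dummy input; replace with a real image tensor
with torch.no_grad():
    pred = model(img)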

Citation

@article{bochkovskiy2020yolov4,
  title={{YOLOv4}: Optimal Speed and Accuracy of Object Detection},
  author={Bochkovskiy, Alexey and Wang, Chien-Yao and Liao, Hong-Yuan Mark},
  journal={arXiv preprint arXiv:2004.10934},
  year={2020}
}
@inproceedings{wang2020cspnet,
  title={{CSPNet}: A New Backbone That Can Enhance Learning Capability of {CNN}},
  author={Wang, Chien-Yao and Mark Liao, Hong-Yuan and Wu, Yueh-Hua and Chen, Ping-Yang and Hsieh, Jun-Wei and Yeh, I-Hau},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  pages={390--391},
  year={2020}
}

Acknowledgements