
PRBNet PyTorch

This is the reference PyTorch implementation for training and testing single-shot object detection and oriented bounding box models using the method described in

Parallel Residual Bi-Fusion Feature Pyramid Network for Accurate Single-Shot Object Detection

Ping-Yang Chen, Ming-Ching Chang, Jun-Wei Hsieh, and Yong-Sheng Chen

TIP 2021 (arXiv pdf)

Performance

MS COCO

P5 Model

| Model | Test Size | APtest | AP50test | AP75test | APStest | Model Description |
| --- | --- | --- | --- | --- | --- | --- |
| YOLOX-x | 640 | 51.5% | - | - | - | - |
| YOLOv7 | 640 | 51.4% | 69.7% | 55.9% | 31.8% | yaml |
| YOLOv4-PRB-CSP | 640 | 51.8% | 70.0% | 56.7% | 32.6% | yaml |
| YOLOv7-PRB | 640 | 52.5% | 70.4% | 57.2% | 33.4% | yaml |

P6 Model

| Model | Test Size | APtest | AP50test | AP75test | FLOPs | Params (M) | Model Description |
| --- | --- | --- | --- | --- | --- | --- | --- |
| YOLOv7-D6 | 1280 | 56.6% | 74.0% | 61.8% | 806.8G | 154.7 | - |
| YOLOv7-E6E | 1280 | 56.8% | 74.4% | 62.1% | 843.2G | 151.7 | - |
| PRB-FPN6-L | 1280 | 55.9% | 73.7% | 61.1% | 195.3G | 137.5 | yaml |
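The efficiency tradeoff in the P6 table can be checked with a little arithmetic: relative to YOLOv7-E6E, PRB-FPN6-L gives up a small amount of APtest in exchange for a large reduction in FLOPs. A minimal sketch, using only the values taken from the table above:

```python
# FLOPs (in GFLOPs) and APtest values copied from the P6 table above.
models = {
    "YOLOv7-D6":  {"ap": 56.6, "flops_g": 806.8},
    "YOLOv7-E6E": {"ap": 56.8, "flops_g": 843.2},
    "PRB-FPN6-L": {"ap": 55.9, "flops_g": 195.3},
}

prb = models["PRB-FPN6-L"]
e6e = models["YOLOv7-E6E"]

# Fraction of FLOPs saved by PRB-FPN6-L relative to YOLOv7-E6E.
flops_saving = 1.0 - prb["flops_g"] / e6e["flops_g"]
# APtest given up relative to YOLOv7-E6E.
ap_gap = e6e["ap"] - prb["ap"]

print(f"FLOPs saving vs. YOLOv7-E6E: {flops_saving:.1%}")  # ~76.8%
print(f"APtest gap: {ap_gap:.1f}")                          # 0.9
```

In other words, PRB-FPN6-L runs at roughly a quarter of the FLOPs of the YOLOv7 P6 models while staying within about one AP point of them.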

If you find our work useful in your research, please consider citing our paper:

@ARTICLE{9603994,
  author={Chen, Ping-Yang and Chang, Ming-Ching and Hsieh, Jun-Wei and Chen, Yong-Sheng},
  journal={IEEE Transactions on Image Processing}, 
  title={Parallel Residual Bi-Fusion Feature Pyramid Network for Accurate Single-Shot Object Detection}, 
  year={2021},
  volume={30},
  number={},
  pages={9099-9111},
  doi={10.1109/TIP.2021.3118953}}

If you also find the CSPNet backbone useful in your research, please consider citing CSPNet. Most of the credit goes to Dr. Wang:

@inproceedings{wang2020cspnet,
  title={{CSPNet}: A New Backbone That Can Enhance Learning Capability of {CNN}},
  author={Wang, Chien-Yao and Mark Liao, Hong-Yuan and Wu, Yueh-Hua and Chen, Ping-Yang and Hsieh, Jun-Wei and Yeh, I-Hau},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  pages={390--391},
  year={2020}
}

Acknowledgement

Without the guidance of Dr. Mark Liao and discussions with Dr. Wang, PRBNet would not have been published so quickly in TIP and open-sourced to the community. Much of the code is borrowed from YOLOv4, YOLOv5_obb, and YOLOv7. Many thanks for their fantastic work: