YOLOv3 and YOLOv4 implementation in TensorFlow 2.x, with support for training, transfer learning, object tracking, mAP evaluation, and so on.
First, clone or download this GitHub repository, then install the requirements and download the pretrained weights:

```bash
pip install -r ./requirements.txt

# yolov3
wget -P model_data https://pjreddie.com/media/files/yolov3.weights
# yolov3-tiny
wget -P model_data https://pjreddie.com/media/files/yolov3-tiny.weights
# yolov4
wget -P model_data https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v3_optimal/yolov4.weights
# yolov4-tiny
wget -P model_data https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v4_pre/yolov4-tiny.weights
```
Start by using the pretrained weights to test predictions on both an image and a video:

```bash
python detection_demo.py
```
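If you prefer to run detection from your own Python code instead of the demo script, a minimal sketch is below. The helper names `Load_Yolo_model`, `detect_image`, and `detect_video` in `yolov3.utils`, as well as the file paths, are assumptions based on how the demo script is typically wired up; check `detection_demo.py` for the exact API.

```python
# Minimal sketch (helper names and paths are assumptions; see detection_demo.py for the real API).
from yolov3.utils import Load_Yolo_model, detect_image, detect_video

yolo = Load_Yolo_model()  # builds the network and loads the downloaded weights

# Detect objects in a single image and write the annotated copy next to it.
detect_image(yolo, "./IMAGES/street.jpg", "./IMAGES/street_pred.jpg",
             input_size=416, show=False)

# Same for a video file, frame by frame.
detect_video(yolo, "./IMAGES/test.mp4", "./IMAGES/test_pred.mp4",
             input_size=416, show=False)
```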
The `mnist` folder contains MNIST images; create the training data:

```bash
python mnist/make_data.py
```
The `./yolov3/configs.py` file is already configured for MNIST training.
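For reference, the MNIST-related entries in that file look roughly like the excerpt below; the exact variable names and paths are assumptions, so check the actual file rather than copying these values.

```python
# yolov3/configs.py (illustrative excerpt -- names and paths are assumptions)
TRAIN_CLASSES    = "mnist/mnist.names"      # class names written by mnist/make_data.py
TRAIN_ANNOT_PATH = "mnist/mnist_train.txt"  # training annotations
TEST_ANNOT_PATH  = "mnist/mnist_test.txt"   # validation annotations
TRAIN_BATCH_SIZE = 4
TRAIN_EPOCHS     = 100
```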
Now you can train and then evaluate your model:

```bash
python train.py
tensorboard --logdir=log
```
Track training progress in TensorBoard by opening http://localhost:6006/:
Test detection with the `detect_mnist.py` script:

```bash
python detect_mnist.py
```
Results:
Custom training requires preparing a dataset first; how to prepare the dataset and train a custom model is described in the following tutorial:
https://pylessons.com/YOLOv3-TF2-custrom-train/
You can read more about YOLOv4 training at this link. I didn't have time to implement all of the YOLOv4 Bag-of-Freebies to improve the training process... Maybe I'll find time to do that later, but for now I leave it as it is. If you need maximum performance, I recommend using Alex's Darknet to train your custom model; otherwise, you can use my implementation.
To learn more about Google Colab free GPU training, visit my text-version tutorial.
For detailed instructions on how to use YOLOv3-Tiny, follow my text-version tutorial on YOLOv3-Tiny support. Short instructions:
- Get the YOLOv3-Tiny weights: `wget -P model_data https://pjreddie.com/media/files/yolov3-tiny.weights`
- In `yolov3/configs.py`, change `TRAIN_YOLO_TINY` from `False` to `True` (see the excerpt below);
- Run the `detection_demo.py` script.
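For clarity, the only change needed in `yolov3/configs.py` is this one flag; the comment is mine and everything else in the file stays as it is:

```python
# yolov3/configs.py (excerpt)
TRAIN_YOLO_TINY = True  # was False; use the tiny variant of the selected YOLO model
```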
To learn more about object tracking with Deep SORT, visit the following link. Quick test:
- Clone this repository;
- Make sure object detection works for you;
- Run the `object_tracking.py` script.
YOLO FPS on COCO 2017 Dataset:
Detection | 320x320 | 416x416 | 512x512 |
---|---|---|---|
YoloV3 FPS | 24.38 | 20.94 | 18.57 |
YoloV4 FPS | 22.15 | 18.69 | 16.50 |
TensorRT FPS on COCO 2017 Dataset:
Detection | 320x320 | 416x416 | 512x512 | 608x608 |
---|---|---|---|---|
YoloV4 FP32 FPS | 31.23 | 27.30 | 22.63 | 18.17 |
YoloV4 FP16 FPS | 30.33 | 25.44 | 21.94 | 17.99 |
YoloV4 INT8 FPS | 85.18 | 62.02 | 47.50 | 37.32 |
YoloV3 INT8 FPS | 84.65 | 52.72 | 38.22 | 28.75 |
mAP on COCO 2017 Dataset:
Detection | 320x320 | 416x416 | 512x512 |
---|---|---|---|
YoloV3 mAP50 | 49.85 | 55.31 | 57.48 |
YoloV4 mAP50 | 48.58 | 56.92 | 61.71 |
TensorRT mAP on COCO 2017 Dataset:
Detection | 320x320 | 416x416 | 512x512 | 608x608 |
---|---|---|---|---|
YoloV4 FP32 mAP50 | 48.58 | 56.92 | 61.71 | 63.92 |
YoloV4 FP16 mAP50 | 48.57 | 56.92 | 61.69 | 63.92 |
YoloV4 INT8 mAP50 | 40.61 | 48.36 | 52.84 | 54.53 |
YoloV3 INT8 mAP50 | 44.19 | 48.64 | 50.10 | 50.69 |
I will give two examples, both for the YOLOv4 model with `quantize_mode=INT8` and a model input size of 608. A detailed tutorial is at this link.
- Download weights from the links above;
- In the `configs.py` script, choose your `YOLO_TYPE`;
- In the `configs.py` script, set `YOLO_INPUT_SIZE = 608`;
- In the `configs.py` script, set `YOLO_FRAMEWORK = "trt"`;
- From the main directory, run `python tools/Convert_to_pb.py` in a terminal;
- From the main directory, run `python tools/Convert_to_TRT.py` in a terminal (a rough sketch of what this conversion does is shown after this list);
- In the `configs.py` script, set `YOLO_CUSTOM_WEIGHTS = f'checkpoints/{YOLO_TYPE}-trt-{YOLO_TRT_QUANTIZE_MODE}-{YOLO_INPUT_SIZE}'`;
- Now you can run `detection_demo.py`; it is best to test with the `detect_video` function.
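For context, the two conversion scripts roughly correspond to exporting the Keras model as a TensorFlow SavedModel and then converting that SavedModel with TF-TRT. The sketch below shows what the TensorRT step can look like with TensorFlow's `trt_convert` API in the INT8 case; the directories, calibration data, and parameters are assumptions, not the repository's actual `tools/Convert_to_TRT.py` code.

```python
import numpy as np
from tensorflow.python.compiler.tensorrt import trt_convert as trt

SAVED_MODEL_DIR = "checkpoints/yolov4-608"           # produced by Convert_to_pb.py (path assumed)
TRT_OUTPUT_DIR  = "checkpoints/yolov4-trt-INT8-608"  # matches the YOLO_CUSTOM_WEIGHTS pattern above

params = trt.DEFAULT_TRT_CONVERSION_PARAMS._replace(
    precision_mode=trt.TrtPrecisionMode.INT8,
    max_workspace_size_bytes=4 << 30)

converter = trt.TrtGraphConverterV2(
    input_saved_model_dir=SAVED_MODEL_DIR,
    conversion_params=params)

def calibration_input_fn():
    # INT8 needs representative inputs for calibration; random data stands in here
    # for real preprocessed images (the repository likely uses dataset images).
    for _ in range(16):
        yield (np.random.random((1, 608, 608, 3)).astype(np.float32),)

converter.convert(calibration_input_fn=calibration_input_fn)
converter.save(TRT_OUTPUT_DIR)
```

For FP32 and FP16, only `precision_mode` changes and no calibration input function is needed.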
For the second example, convert and test a custom-trained model:
- Download weights from the links above;
- In the `configs.py` script, choose your `YOLO_TYPE`;
- In the `configs.py` script, set `YOLO_INPUT_SIZE = 608`;
- Train a custom YOLO model with the instructions above;
- In the `configs.py` script, set `YOLO_CUSTOM_WEIGHTS = f"{YOLO_TYPE}_custom"`;
- In the `configs.py` script, make sure that `TRAIN_CLASSES` points to your custom classes text file;
- From the main directory, run `python tools/Convert_to_pb.py` in a terminal;
- From the main directory, run `python tools/Convert_to_TRT.py` in a terminal;
- In the `configs.py` script, set `YOLO_FRAMEWORK = "trt"`;
- In the `configs.py` script, set `YOLO_CUSTOM_WEIGHTS = f'checkpoints/{YOLO_TYPE}-trt-{YOLO_TRT_QUANTIZE_MODE}-{YOLO_INPUT_SIZE}'`;
- Now you can run `detection_custom.py` to test the custom-trained and converted TensorRT model.
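Putting the two examples together, the `configs.py` settings they touch look roughly like this for the YOLOv4 / INT8 / 608 case (values are illustrative, taken from the steps above):

```python
# yolov3/configs.py (illustrative excerpt for the YOLOv4 INT8 608 example)
YOLO_TYPE              = "yolov4"
YOLO_FRAMEWORK         = "trt"   # "tf" before conversion, "trt" to load the converted model
YOLO_INPUT_SIZE        = 608
YOLO_TRT_QUANTIZE_MODE = "INT8"  # FP32 / FP16 / INT8
# Before converting a custom model:
# YOLO_CUSTOM_WEIGHTS = f"{YOLO_TYPE}_custom"
# After conversion:
YOLO_CUSTOM_WEIGHTS    = f'checkpoints/{YOLO_TYPE}-trt-{YOLO_TRT_QUANTIZE_MODE}-{YOLO_INPUT_SIZE}'
```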
- Detection with original weights Tutorial link
- Mnist detection training Tutorial link
- Custom detection training Tutorial link1, link2
- Google Colab training Tutorial link
- YOLOv3-Tiny support Tutorial link
- Object tracking Tutorial link
- Mean Average Precision (mAP) Tutorial link
- Yolo v3 on Raspberry Pi Tutorial link
- YOLOv4 and YOLOv4-tiny detection Tutorial link
- YOLOv4 and YOLOv4-tiny detection training (not fully implemented) Tutorial link
- Convert to TensorRT model Tutorial link
- Add multiprocessing after detection (drawing bbox) Tutorial link
- Converting to TensorFlow Lite
- YOLO on Android (leaving it for the future; will need to convert everything to Java... not ready for this)
- Generating anchors
- YOLACT: Real-time Instance Segmentation
- Model pruning (pruning is a model-optimization technique in deep learning that eliminates unnecessary values in the weight tensor, producing smaller and more efficient neural networks; see the sketch after this list)
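Pruning is not implemented here yet; the sketch below shows what magnitude-based weight pruning of a Keras model looks like with the `tensorflow_model_optimization` package. The model, sparsity schedule, and training data are placeholders, not code from this repository.

```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot

# Placeholder model standing in for a YOLO backbone/head.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(416, 416, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10),
])

# Ramp sparsity from 20% to 80% of weights over the first 1000 training steps.
pruning_params = {
    "pruning_schedule": tfmot.sparsity.keras.PolynomialDecay(
        initial_sparsity=0.2, final_sparsity=0.8, begin_step=0, end_step=1000)
}
pruned_model = tfmot.sparsity.keras.prune_low_magnitude(model, **pruning_params)

pruned_model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))

# UpdatePruningStep must be passed as a callback so the pruning masks are updated during training:
# pruned_model.fit(x_train, y_train, epochs=2,
#                  callbacks=[tfmot.sparsity.keras.UpdatePruningStep()])

# Strip the pruning wrappers before export to get the final, smaller model.
final_model = tfmot.sparsity.keras.strip_pruning(pruned_model)
```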