keras-yolo3

A Keras implementation of YOLOv3 (TensorFlow backend)
Contributions to this project:

  1. Make the project easier to understand overall, including running the scripts and testing or training the models
  2. Build .pb files for TensorFlow Android usage
  3. Create an inference module that is easier to follow when implementing for Android in Java (work in progress)

Introduction

A Keras implementation of YOLOv3 (Tensorflow backend) inspired by allanzelener/YAD2K.


Quick Start

  1. Download YOLOv3 weights from YOLO website.
  2. Convert the Darknet YOLO model to a Keras model.
  3. Run YOLO detection.
wget https://pjreddie.com/media/files/yolov3.weights
python convert.py yolov3.cfg yolov3.weights model_data/yolo.h5
python yolo_detection.py [OPTIONS...] --input [image_path] --image   (image detection mode), OR
python yolo_detection.py [OPTIONS...] --input [video_path] [output_path (optional)]

If no output path is given, the labeled video is stored in the same location as the original, with the name video_path_labeled_video.

For Tiny YOLOv3, proceed the same way, but specify the model path and anchor path with --model model_file and --anchors anchor_file.
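For reference, the anchors file follows the usual Darknet/YOLO convention: a single line of comma-separated width,height pairs (nine pairs for full YOLOv3, six for the tiny variant). A minimal parser sketch, assuming that format (load_anchors is an illustrative helper, not a function from this repo):

```python
def load_anchors(path):
    """Parse a YOLO anchors file: one line of comma-separated
    width,height values, e.g. "10,13, 16,30, 33,23, ...".
    Returns a list of (width, height) anchor pairs."""
    with open(path) as f:
        values = [float(v) for v in f.readline().split(",")]
    return [(values[i], values[i + 1]) for i in range(0, len(values), 2)]
```

For example, parsing the default YOLOv3 anchors line "10,13, 16,30, 33,23, 30,61, 62,45, 59,119, 116,90, 156,198, 373,326" yields nine pairs, starting with (10.0, 13.0).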

Usage

Use --help to see usage of yolo_detection.py:

usage: yolo_detection.py [-h] [--model_path MODEL_PATH]
                         [--anchors_path ANCHORS_PATH]
                         [--classes_path CLASSES_PATH] [--gpu_num GPU_NUM]
                         [--print_summary PRINT_SUMMARY] [--save_pb SAVE_PB]
                         [--image] [--input [INPUT]] [--output [OUTPUT]]

optional arguments:
  -h, --help            show this help message and exit
  --model_path MODEL_PATH
                        path to model weight file, default weights/yolo.h5
  --anchors_path ANCHORS_PATH
                        path to anchor definitions, default
                        model_data/yolo_anchors.txt
  --classes_path CLASSES_PATH
                        path to class definitions, default
                        model_data/coco_classes.txt
  --gpu_num GPU_NUM     Number of GPUs to use, default 1
  --print_summary PRINT_SUMMARY
                        Print summary of the models, default False
  --save_pb SAVE_PB     Save the tf model in pb format, default False
  --image               Image detection mode.
  --input [INPUT]       Video or Image (if with --image) input path
  --output [OUTPUT]     [Optional] Video output path (currently output frames
                        are being stored in test_data directory)

  1. Multi-GPU usage: pass --gpu_num N to use N GPUs; the value is forwarded to the Keras multi_gpu_model().

Training

  1. Generate your own annotation file and class names file.
    One row for one image;
    Row format: image_file_path box1 box2 ... boxN;
    Box format: x_min,y_min,x_max,y_max,class_id (no space).
    For VOC dataset, try python voc_annotation.py
    Here is an example:

    path/to/img1.jpg 50,100,150,200,0 30,50,200,120,3
    path/to/img2.jpg 120,300,250,600,2
    ...
    
  2. Make sure you have run python convert.py -w yolov3.cfg yolov3.weights model_data/yolo_weights.h5
    The file model_data/yolo_weights.h5 is used to load pretrained weights.

  3. Modify train.py and start training.
    python train.py
    Use your trained weights or checkpoint weights with the command-line option --model model_file when running yolo_detection.py. Remember to also set the class path and anchor path with --classes class_file and --anchors anchor_file.
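The annotation row format from step 1 can be parsed with a few lines of Python. This is an illustrative sketch of the format (parse_annotation_line is a hypothetical helper, not the loader used by train.py):

```python
def parse_annotation_line(line):
    """Split one annotation row into an image path and a list of boxes.
    Row format: image_file_path box1 box2 ... boxN
    Box format: x_min,y_min,x_max,y_max,class_id (no spaces inside a box)."""
    parts = line.strip().split()
    image_path = parts[0]
    boxes = []
    for box in parts[1:]:
        x_min, y_min, x_max, y_max, class_id = (int(v) for v in box.split(","))
        boxes.append({"xmin": x_min, "ymin": y_min,
                      "xmax": x_max, "ymax": y_max, "class": class_id})
    return image_path, boxes
```

Applied to the first example row above, this returns "path/to/img1.jpg" together with two boxes of class 0 and class 3.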

If you want to use the original pretrained weights for YOLOv3:

  1. wget https://pjreddie.com/media/files/darknet53.conv.74
  2. Rename it to darknet53.weights
  3. python convert.py -w darknet53.cfg darknet53.weights model_data/darknet53_weights.h5
  4. Use model_data/darknet53_weights.h5 in train.py


Some issues to know

  1. The test environment is

    • Python 3.5.2
    • Keras 2.1.5
    • tensorflow 1.6.0
  2. Default anchors are used. If you use your own anchors, probably some changes are needed.

  3. The inference result is not totally the same as Darknet but the difference is small.

  4. The speed is slower than Darknet. Replacing PIL with opencv may help a little.

  5. Always load pretrained weights and freeze layers in the first stage of training. Or try Darknet training. It's OK if there is a mismatch warning.

  6. The training strategy is for reference only. Adjust it according to your dataset and your goal. And add further strategy if needed.

  7. To speed up training with frozen layers, train_bottleneck.py can be used. It first computes the bottleneck features of the frozen model, and then trains only the last layers on those cached features. This makes training on CPU possible in a reasonable time.
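The bottleneck idea can be illustrated with a small NumPy sketch. This is purely conceptual and not the code in train_bottleneck.py: the frozen layers' outputs are computed once, and only the final layer is then trained on those cached features, so each epoch skips the expensive frozen forward pass.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: a "frozen base" (a fixed random projection) and a dataset.
n_samples, n_inputs, n_features = 200, 32, 8
X = rng.normal(size=(n_samples, n_inputs))
W_frozen = rng.normal(size=(n_inputs, n_features))        # frozen layers
y = (X @ W_frozen @ rng.normal(size=n_features) > 0).astype(float)

# Step 1: run the frozen forward pass ONCE and cache the result.
features = np.maximum(X @ W_frozen, 0.0)                  # bottleneck features (ReLU)

# Step 2: train only the last layer (a logistic-regression head)
# on the precomputed features.
w = np.zeros(n_features)
b = 0.0
lr = 0.1

def loss(w, b):
    p = 1.0 / (1.0 + np.exp(-(features @ w + b)))
    return -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

initial = loss(w, b)
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(features @ w + b)))
    grad = p - y
    w -= lr * features.T @ grad / n_samples
    b -= lr * grad.mean()
final = loss(w, b)
```

The loss of the head decreases while the frozen weights are never touched and the frozen forward pass runs only once; in the real script the base is the frozen YOLO body rather than a random projection.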