darkflow

Translate darknet to TensorFlow. Load trained weights, retrain/fine-tune using TensorFlow, and export a constant graph def to mobile devices.

Primary language: Python. License: GNU General Public License v3.0 (GPL-3.0).

Intro

Real-time object detection and classification. Paper: version 1, version 2.

Read more about YOLO (in darknet) and download weight files for version 2 here.

Some weight files for version 1 are here.


Dependencies

Python 3, TensorFlow 0.12, numpy, OpenCV 3.

Update

An Android demo is available on TensorFlow's official GitHub here!

YOLOv1 is up and running:

  • v1.0: yolo-full 1.1GB, yolo-small 376MB, yolo-tiny 180MB
  • v1.1: yolov1 789MB, tiny-yolo 108MB, tiny-coco 268MB, yolo-coco 937MB

YOLOv2 is up and running:

  • yolo 270MB, tiny-yolo-voc 63MB.

Parsing the annotations

Skip this if you are not training or fine-tuning anything (i.e., you simply want to forward a trained net).

For example, if you want to work with only 3 classes, tvmonitor, person, and pottedplant, edit labels.txt as follows:

tvmonitor
person
pottedplant

And that's it. darkflow will take care of the rest.
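Since the label file is just one class name per line, it can also be generated programmatically. A minimal sketch (the class list and file path are illustrative):

```python
# Write a labels.txt restricting training to three VOC classes.
# Use any subset of your dataset's classes, one name per line.
classes = ["tvmonitor", "person", "pottedplant"]

with open("labels.txt", "w") as f:
    f.write("\n".join(classes) + "\n")

# Read it back the way a parser might: skip blank lines, strip whitespace.
with open("labels.txt") as f:
    labels = [line.strip() for line in f if line.strip()]

print(labels)
```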

Design the net

Skip this if you are working with one of the three original configurations since they are already there. Otherwise, see the following example:

...

[convolutional]
batch_normalize = 1
size = 3
stride = 1
pad = 1
activation = leaky

[maxpool]

[connected]
output = 4096
activation = linear

...
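A darknet .cfg file is an INI-like format, except that section names repeat (a net has many [convolutional] blocks), so the standard configparser cannot represent it directly. A minimal sketch of parsing such a file into an ordered list of sections (this is an illustration of the format, not darkflow's internal parser):

```python
# Parse darknet-style cfg text into an ordered list of
# (section_name, options_dict) pairs, preserving duplicates.
def parse_cfg(text):
    sections = []
    for line in text.splitlines():
        line = line.split("#")[0].strip()  # drop comments and blank lines
        if not line:
            continue
        if line.startswith("[") and line.endswith("]"):
            sections.append((line[1:-1], {}))
        else:
            key, _, value = line.partition("=")
            sections[-1][1][key.strip()] = value.strip()
    return sections

cfg = """
[convolutional]
batch_normalize = 1
size = 3
stride = 1
pad = 1
activation = leaky

[maxpool]

[connected]
output = 4096
activation = linear
"""
print(parse_cfg(cfg))
```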

Flowing the graph using flow

# Have a look at its options
./flow --h

First, let's take a closer look at a very useful option, --load.

# 1. Load yolo-tiny.weights
./flow --model cfg/v1/yolo-tiny.cfg --load bin/yolo-tiny.weights

# 2. To completely initialize a model, leave out the --load option
./flow --model cfg/yolo-3c.cfg

# 3. It is useful to reuse the first identical layers of tiny for 3c
./flow --model cfg/yolo-3c.cfg --load bin/yolo-tiny.weights
# this will print out which layers are reused, which are initialized

All input images from default folder test/ are flowed through the net and predictions are put in test/out/. We can always specify more parameters for such forward passes, such as detection threshold, batch size, test folder, etc.
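A parameter like the detection threshold simply filters the raw predictions by confidence before boxes are drawn. A sketch of the idea (the detection structure and names here are illustrative, not darkflow's internal API):

```python
# Illustrative post-processing: keep only detections whose confidence
# exceeds a threshold, which is what a detection-threshold option controls.
detections = [
    {"label": "person",      "confidence": 0.92},
    {"label": "tvmonitor",   "confidence": 0.41},
    {"label": "pottedplant", "confidence": 0.07},
]

def filter_by_threshold(dets, threshold=0.5):
    return [d for d in dets if d["confidence"] > threshold]

print(filter_by_threshold(detections, 0.5))
```

Lowering the threshold keeps more (and less certain) boxes; raising it keeps only high-confidence detections.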

# Forward all images in test/ using tiny yolo and 100% GPU usage
./flow --test test/ --model cfg/v1/yolo-tiny.cfg --load bin/yolo-tiny.weights --gpu 1.0

Training new model

Training is simple: you only have to add the --train option, like below:

# Initialize yolo-3c from yolo-tiny, then train the net on 100% GPU:
./flow --model cfg/yolo-3c.cfg --load bin/yolo-tiny.weights --train --gpu 1.0

# Completely initialize yolo-3c and train it with ADAM optimizer
./flow --model cfg/yolo-3c.cfg --train --trainer adam

During training, the script will occasionally save intermediate results as TensorFlow checkpoints, stored in ckpt/. To resume from any checkpoint before training or testing, use the --load [checkpoint_num] option; if checkpoint_num < 0, darkflow will load the most recent save by parsing ckpt/checkpoint.

# Resume the most recent checkpoint for training
./flow --train --model cfg/yolo-3c.cfg --load -1

# Test with checkpoint at step 1500
./flow --model cfg/yolo-3c.cfg --load 1500

# Fine-tune yolo-tiny from the original weights
./flow --train --model cfg/v1/yolo-tiny.cfg --load bin/yolo-tiny.weights
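The ckpt/checkpoint file consulted by --load -1 is TensorFlow's standard bookkeeping file, whose first line names the most recent save. A sketch of resolving the latest step number from it (the file contents below are an illustrative example, not output from an actual run):

```python
import re

# TensorFlow records the most recent save on the first line of
# ckpt/checkpoint; the trailing number in the name is the global step.
checkpoint_text = '''model_checkpoint_path: "yolo-3c-1500"
all_model_checkpoint_paths: "yolo-3c-1000"
all_model_checkpoint_paths: "yolo-3c-1500"
'''

def latest_step(text):
    # Pull the save name off the first line, then its trailing step number.
    name = re.search(r'model_checkpoint_path: "([^"]+)"', text).group(1)
    return int(name.rsplit("-", 1)[1])

print(latest_step(checkpoint_text))
```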

Camera demo

./flow --model cfg/yolo-3c.cfg --load bin/yolo-3c.weights --demo camera

Migrating the graph to mobile devices (JAVA / C++ / Objective-C++)

## Saving the latest checkpoint to a protobuf file
./flow --model cfg/yolo-3c.cfg --load -1 --savepb

The input and output tensors are named 'input' and 'output', respectively. For further usage of this protobuf file, please refer to the official TensorFlow documentation on the C++ API here. To run it in, say, an iOS application, simply add the file to Bundle Resources and update the path to this file inside the source code.

That's all.