Real-time object detection and classification. Paper: version 1, version 2.
Read more about YOLO (in darknet) and download weight files for version 2 here.
Some weight files for version 1 here.
Python 3, TensorFlow 1.0, NumPy, OpenCV 3.
An Android demo is available on TensorFlow's official GitHub here.
YOLOv1 is up and running:
- v1.0: yolo-full (1.1 GB), yolo-small (376 MB), yolo-tiny (180 MB)
- v1.1: yolov1 (789 MB), tiny-yolo (108 MB), tiny-coco (268 MB), yolo-coco (937 MB)
YOLOv2 is up and running:
- yolo (270 MB), tiny-yolo-voc (63 MB)
Skip this if you are not training or fine-tuning anything (you simply want to forward flow a trained net).
For example, if you want to work with only 3 classes (tvmonitor, person, pottedplant), edit labels.txt as follows:
tvmonitor
person
pottedplant
And that's it. darkflow will take care of the rest.
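Once the net is trained on these labels, a predicted class index can be mapped back to its name by reading labels.txt in order. A minimal sketch; the file path and the assumption that class indices follow the file's line order are for illustration only, not taken from darkflow's code:

```python
# Read class names from labels.txt, one per line (as edited above)
with open('labels.txt') as f:
    labels = [line.strip() for line in f if line.strip()]

# Assumption: class index i corresponds to line i of labels.txt
print(labels[1])  # -> 'person' for the 3-class example above
```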
Skip this if you are working with one of the three original configurations since they are already there. Otherwise, see the following example:
...
[convolutional]
batch_normalize = 1
size = 3
stride = 1
pad = 1
activation = leaky
[maxpool]
[connected]
output = 4096
activation = linear
...
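A cfg file is just a sequence of [section] headers, each followed by key = value options, and darkflow builds the network from these sections. Below is a minimal, illustrative parser sketch (not darkflow's actual code) that shows this structure; the file name is only an example:

```python
# Illustrative sketch: read a darknet-style .cfg into (section_name, options) pairs.
def parse_cfg(path):
    sections = []
    with open(path) as f:
        for line in f:
            line = line.split('#')[0].strip()      # drop comments and whitespace
            if not line:
                continue
            if line.startswith('[') and line.endswith(']'):
                sections.append((line[1:-1], {}))  # e.g. ('convolutional', {...})
            else:
                key, value = (part.strip() for part in line.split('=', 1))
                sections[-1][1][key] = value
    return sections

# Example (assumed file name):
# parse_cfg('cfg/yolo-3c.cfg')
```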
# Have a look at its options
./flow --h

First, let's take a closer look at one very useful option: --load
# 1. Load yolo-tiny.weights
./flow --model cfg/yolo-tiny.cfg --load bin/yolo-tiny.weights
# 2. To completely initialize a model, leave out the --load option
./flow --model cfg/yolo-3c.cfg
# 3. It is useful to reuse the first identical layers of tiny for 3c
./flow --model cfg/yolo-3c.cfg --load bin/yolo-tiny.weights
# this will print out which layers are reused, which are initialized

All input images from the default folder test/ are flowed through the net and predictions are saved in test/out/. We can always specify more parameters for such forward passes, such as detection threshold, batch size, test folder, etc.
# Forward all images in test/ using tiny yolo and 100% GPU usage
./flow --test test/ --model cfg/yolo-tiny.cfg --load bin/yolo-tiny.weights --gpu 1.0

Training is simple: you only have to add the option --train, like below:
# Initialize yolo-3c from yolo-tiny, then train the net on 100% GPU:
./flow --model cfg/yolo-3c.cfg --load bin/yolo-tiny.weights --train --gpu 1.0
# Completely initialize yolo-3c and train it with ADAM optimizer
./flow --model cfg/yolo-3c.cfg --train --trainer adam

During training, the script will occasionally save intermediate results into TensorFlow checkpoints, stored in ckpt/. To resume from any checkpoint before performing training/testing, use the --load [checkpoint_num] option; if checkpoint_num < 0, darkflow will load the most recent save by parsing ckpt/checkpoint.
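As a side note, TensorFlow ships a helper that does the same lookup on ckpt/checkpoint, which can be handy if you want to inspect the saves yourself; a minimal sketch:

```python
import tensorflow as tf

# Parse ckpt/checkpoint and return the path of the most recent save,
# i.e. the same checkpoint darkflow picks when --load is negative.
latest = tf.train.latest_checkpoint('ckpt/')
print(latest)
```

The corresponding flow commands: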
# Resume the most recent checkpoint for training
./flow --train --model cfg/yolo-3c.cfg --load -1
# Test with checkpoint at step 1500
./flow --model cfg/yolo-3c.cfg --load 1500
# Fine tuning yolo-tiny from the original one
./flow --train --model cfg/yolo-tiny.cfg --load bin/yolo-tiny.weights

# Run a real-time demo with the camera as input
./flow --model cfg/yolo-3c.cfg --load bin/yolo-3c.weights --demo camera

## Saving the latest checkpoint to a protobuf file
./flow --model cfg/yolo-3c.cfg --load -1 --savepb

The names of the input and output tensors are 'input' and 'output', respectively. For further usage of this protobuf file, please refer to the official TensorFlow documentation on the C++ API here. To run it in, say, an iOS application, simply add the file to Bundle Resources and update the path to this file inside the source code.
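If you would rather test the saved graph from Python first, it can be loaded with plain TensorFlow 1.x. A minimal sketch; the .pb file name and the 416x416 input size are assumptions for illustration, while the tensor names 'input' and 'output' come from the text above:

```python
import numpy as np
import tensorflow as tf

# Load the frozen graph produced by --savepb (file name is an assumed example)
with tf.gfile.GFile('yolo-3c.pb', 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

graph = tf.Graph()
with graph.as_default():
    tf.import_graph_def(graph_def, name='')

# Tensor names 'input' and 'output' are stated above; ':0' selects the tensor
inp = graph.get_tensor_by_name('input:0')
out = graph.get_tensor_by_name('output:0')

with tf.Session(graph=graph) as sess:
    # Dummy batch of one 416x416 RGB image (size is an assumption; match your cfg)
    dummy = np.zeros((1, 416, 416, 3), dtype=np.float32)
    prediction = sess.run(out, feed_dict={inp: dummy})
    print(prediction.shape)
```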
That's all.
