This project shows how to run tiny YOLOv2 (20 classes) with the Movidius Neural Compute Stick:
- A Python converter from YOLO to Caffe
- A C/C++ implementation and Python wrapper for the region layer of YOLOv2
- A sample for running YOLOv2 with the Movidius stick on images or videos
Updates:
- Support NCSDK 2.0 (Thanks cpagravel!)
- Release 1.0 for NCSDK v1.0
- Refine output bboxes according to letterbox_image in YOLOv2, 01/03/2018, 01/12/2018 (Thanks nathiyaa!)
- Support multiple sticks, 12/29/2017 (Thanks ichigoi7e!)
- Process video in the sample, 12/15/2017 (Thanks ichigoi7e!)
- Fix confidence offset issues in NMS, 12/12/2017
The following experiments were done on an Intel NUC running Ubuntu 16.04.
Please install the NCSDK by following https://github.com/movidius/ncsdk.
Compile the C/C++ code and the Python wrapper:
make
Compile the Caffe model for the NCS:
mvNCCompile ./models/caffemodels/yoloV2Tiny20.prototxt -w ./models/caffemodels/yoloV2Tiny20.caffemodel -s 12
This generates a file named graph, which is the converted model for the NCS.
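If you want to sanity-check the generated graph outside of the provided sample, the sketch below shows one way to load it with the NCSDK v1 Python API (mvnc). It is only a minimal illustration: the 416x416 input size and the [0, 1] preprocessing are assumptions based on typical tiny YOLOv2 settings, and the loading logic actually used by this project lives in ./detectionExample/ObjectWrapper.py. Note that NCSDK 2.0 uses a different, FIFO-based API.

```python
# Minimal sketch (NCSDK v1 mvnc API): load the compiled "graph" file onto the
# first attached stick and run one inference. Preprocessing (416x416 resize,
# scaling to [0, 1]) is an assumption and may differ from the real sample.
import cv2
import numpy as np
from mvnc import mvncapi as mvnc

devices = mvnc.EnumerateDevices()
if not devices:
    raise RuntimeError('No Movidius device found')
device = mvnc.Device(devices[0])
device.OpenDevice()

with open('./graph', 'rb') as f:          # file produced by mvNCCompile above
    graph_blob = f.read()
graph = device.AllocateGraph(graph_blob)

img = cv2.imread('./data/dog.jpg')
blob = cv2.resize(img, (416, 416)).astype(np.float32) / 255.0

graph.LoadTensor(blob.astype(np.float16), 'user object')
out, _ = graph.GetResult()                # raw output; decode with the region layer

graph.DeallocateGraph()
device.CloseDevice()
```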
Run the detection sample on an image:
python3 ./detectionExample/Main.py --image ./data/dog.jpg
This loads the graph file by default and outputs the detection results.
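For video input (added in the 12/15/2017 update), the overall pattern is a frame loop around the same detection call. The sketch below is purely illustrative: the ObjectWrapper constructor, its Detect() method, and the result fields are assumptions loosely modeled on ./detectionExample/ObjectWrapper.py and ./detectionExample/Main.py, so check those files for the real interface.

```python
# Illustrative frame loop for video detection with OpenCV. The detector
# interface used here (ObjectWrapper('./graph'), Detect(frame), and the
# left/top/right/bottom fields) is assumed, not taken verbatim from the sample.
import cv2
from ObjectWrapper import ObjectWrapper

detector = ObjectWrapper('./graph')            # assumed constructor signature
cap = cv2.VideoCapture('./data/sample.mp4')    # hypothetical video path

while True:
    ok, frame = cap.read()
    if not ok:
        break
    for box in detector.Detect(frame):         # assumed to return detections
        cv2.rectangle(frame, (box.left, box.top), (box.right, box.bottom),
                      (0, 255, 0), 2)
    cv2.imshow('tiny YOLOv2 on NCS', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
```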
To convert a YOLO model to Caffe yourself, install Caffe and configure the Python environment path, then run:
sh ./models/convertyo.sh
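As an optional sanity check (not part of the official flow), you can try loading the converted model with pycaffe and printing the blob shapes. This assumes Caffe's Python bindings are on your PYTHONPATH and that the generated prototxt contains only layers standard Caffe supports, since the region layer is implemented separately in this repo.

```python
# Optional sanity check: load the converted model with pycaffe and list the
# blob shapes. The file names reuse the paths from the mvNCCompile step above;
# adjust them to whatever convertyo.sh actually produced.
import caffe

net = caffe.Net('./models/caffemodels/yoloV2Tiny20.prototxt',
                './models/caffemodels/yoloV2Tiny20.caffemodel',
                caffe.TEST)
for name, blob in net.blobs.items():
    print(name, blob.data.shape)
```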
Tips:
Please ignore error messages similar to "Region layer is not supported".
The converted Caffe model files should end with "prototxt" and "caffemodel".
Please update the parameters (biases, object names, etc.) in ./src/CRegionLayer.cpp, and the parameters (dim, blockwd, targetBlockwd, classe, etc.) in ./detectionExample/ObjectWrapper.py.
Please read ./src/CRegionLayer.cpp and ./detectionExample/ObjectWrapper.py for details.
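For reference, the sketch below shows the standard YOLOv2 region-layer decoding that parameters such as biases (the anchors), blockwd (the grid width), and the class count feed into. It is a simplified NumPy illustration with typical tiny-YOLOv2-VOC defaults, not the code in ./src/CRegionLayer.cpp, and it omits the letterbox correction and NMS steps applied by the real pipeline.

```python
# Simplified NumPy sketch of standard YOLOv2 region-layer decoding.
# NOT the code in ./src/CRegionLayer.cpp; the anchors, 13x13 grid, 20 classes,
# and 0.3 threshold are typical tiny-YOLOv2-VOC defaults, assumed here.
import numpy as np

ANCHORS = [(1.08, 1.19), (3.42, 4.41), (6.63, 11.38), (9.42, 5.11), (16.62, 10.52)]
GRID, CLASSES, THRESH = 13, 20, 0.3

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def decode(output):
    """output: raw network tensor reshaped to (GRID, GRID, len(ANCHORS), 5 + CLASSES)."""
    boxes = []
    for row in range(GRID):
        for col in range(GRID):
            for a, (aw, ah) in enumerate(ANCHORS):
                tx, ty, tw, th, tobj = output[row, col, a, :5]
                scores = output[row, col, a, 5:]
                probs = np.exp(scores - scores.max())
                probs /= probs.sum()                 # softmax over the 20 classes
                conf = sigmoid(tobj) * probs.max()   # objectness * best class prob
                if conf < THRESH:
                    continue
                # Box center and size in grid-cell units, normalized to [0, 1].
                x = (col + sigmoid(tx)) / GRID
                y = (row + sigmoid(ty)) / GRID
                w = aw * np.exp(tw) / GRID
                h = ah * np.exp(th) / GRID
                boxes.append((x, y, w, h, int(probs.argmax()), float(conf)))
    return boxes  # apply NMS and letterbox un-scaling afterwards
```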
Research Only