caffe-faster-rcnn

Faster R-CNN, C++ version, with joint training; please check out the dev branch.


Special Feature for My Caffe

  • Cloned from the official Caffe; will be continually kept up to date with the official Caffe code
  • Faster rcnn joint train and test [DONE]
  • Action recognition (Two Stream) [DONE]
  • With demos for all of the above tasks ^_^

Faster RCNN End-To-End

Disclaimer

The official Faster R-CNN code (written in MATLAB) is available here. If your goal is to reproduce the results in our NIPS 2015 paper, please use the official code.

This repository contains a C++ reimplementation of the Python code (py-faster-rcnn). The C++ implementation is built on the official Caffe; I will continue to improve this code and keep it up to date with the official Caffe.

All of the following steps should be run from the $CAFFE_ROOT directory.

Demo

Running sh examples/FRCNN/demo_frcnn.sh will process five pictures in examples/FRCNN/images and put the results into examples/FRCNN/results.

Note: you should place the trained caffemodel in models/FRCNN/ as ZF_faster_rcnn_final.caffemodel.
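
Before launching the demo, it can help to confirm the layout it expects. This is a minimal sketch, assuming $CAFFE_ROOT points at your checkout (defaulting to the current directory is an assumption of this sketch, not something the demo script requires):

```shell
# Sanity-check the files the demo expects; CAFFE_ROOT defaults to the
# current directory if unset (an assumption for this sketch).
CAFFE_ROOT="${CAFFE_ROOT:-$PWD}"
missing=0
for p in models/FRCNN/ZF_faster_rcnn_final.caffemodel \
         examples/FRCNN/demo_frcnn.sh \
         examples/FRCNN/images; do
  if [ ! -e "$CAFFE_ROOT/$p" ]; then
    echo "missing: $p"
    missing=$((missing + 1))
  fi
done

# Launch the demo only when everything is in place; results are
# written to examples/FRCNN/results.
if [ "$missing" -eq 0 ]; then
  (cd "$CAFFE_ROOT" && sh examples/FRCNN/demo_frcnn.sh)
fi
```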

Train

Running sh examples/FRCNN/zf/train_frcnn.sh will start training on the VOC 2007 data using the ZF model.

  • VOCdevkit should be put into $CAFFE_ROOT
  • ln -s $YOUR_VOCdevkit_Path $CAFFE_ROOT/VOCdevkit
  • The ZF pretrained model should be put into models/FRCNN/ as ZF.v2.caffemodel
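
The dataset setup above can be sketched as follows; $CAFFE_ROOT and $YOUR_VOCdevkit_Path are placeholders you should point at your own Caffe checkout and VOC devkit:

```shell
# Placeholder defaults -- replace with your real paths.
CAFFE_ROOT="${CAFFE_ROOT:-$PWD}"
YOUR_VOCdevkit_Path="${YOUR_VOCdevkit_Path:-$HOME/data/VOCdevkit}"

# Symlink the devkit into the Caffe root so train_frcnn.sh can find it
# (-n avoids following an existing link, -f replaces a stale one).
ln -sfn "$YOUR_VOCdevkit_Path" "$CAFFE_ROOT/VOCdevkit"

# With the ZF weights at models/FRCNN/ZF.v2.caffemodel, training is
# then launched with:
#   sh examples/FRCNN/zf/train_frcnn.sh
```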

Test

Running sh examples/FRCNN/zf/test_frcnn.sh will start testing on the VOC 2007 test data using the trained ZF model.

  • First step of this shell: test all VOC 2007 test images and write the results to a text file.
  • Second step of this shell: compare the results with the ground-truth file and calculate the mAP.
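
To illustrate the hand-off between the two steps, here is a small sketch of filtering a detection results file before evaluation. The record layout (image id, class, score, box) is an assumption for illustration, not the script's exact format:

```shell
# A toy results file in an assumed "id class score x1 y1 x2 y2" layout.
RESULTS=$(mktemp)
cat > "$RESULTS" <<'EOF'
000001 car 0.93 12 34 240 180
000002 dog 0.41 5 10 90 77
000003 car 0.88 40 52 300 220
EOF

# Keep only confident detections (score in field 3 >= 0.5), as an
# evaluation script might before matching against ground truth.
kept=$(awk '$3 >= 0.5' "$RESULTS" | wc -l | tr -d ' ')
echo "$kept confident detections"   # prints "2 confident detections"
```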

Detail

Shells and prototxts for the different models are listed in examples/FRCNN and models/FRCNN.

More details in the code.

Commands to Rebase From Caffe Master

To stay synchronized with the official Caffe:

Rebase the dev branch

  • git checkout dev
  • git rebase master
  • git push -f origin dev
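
The rebase flow above can be exercised in a scratch repository like this (the repository layout and commit messages are illustrative; the push is commented out since it needs a real remote):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name dev

# One commit on master, one on dev, then one more on master to
# stand in for new upstream work.
echo base > file.txt && git add file.txt && git commit -q -m "base"
git branch -M master
git checkout -q -b dev
echo "dev work" > dev.txt && git add dev.txt && git commit -q -m "dev work"
git checkout -q master
echo upstream > upstream.txt && git add upstream.txt && git commit -q -m "upstream sync"

# The three steps from the README:
git checkout -q dev
git rebase -q master
# git push -f origin dev   # -f because the rebase rewrote dev's history
```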

QA

Caffe


Caffe is a deep learning framework made with expression, speed, and modularity in mind. It is developed by the Berkeley Vision and Learning Center (BVLC) and community contributors.

Check out the project site for all the details and step-by-step examples.


Please join the caffe-users group or gitter chat to ask questions and talk about methods and models. Framework development discussions and thorough bug reports are collected on Issues.

Happy brewing!

License and Citation

Caffe is released under the BSD 2-Clause license. The BVLC reference models are released for unrestricted use.

Please cite Caffe in your publications if it helps your research:

@article{jia2014caffe,
  Author = {Jia, Yangqing and Shelhamer, Evan and Donahue, Jeff and Karayev, Sergey and Long, Jonathan and Girshick, Ross and Guadarrama, Sergio and Darrell, Trevor},
  Journal = {arXiv preprint arXiv:1408.5093},
  Title = {Caffe: Convolutional Architecture for Fast Feature Embedding},
  Year = {2014}
}