Modified from https://github.com/longcw/faster_rcnn_pytorch: fixed some GPU memory-leak bugs and made the code run under Python 3.
Tested with Python 3.6 and PyTorch 0.4.0 on a GTX 1060 (6 GB).
Note: I re-implemented Faster R-CNN in this project when I started learning PyTorch, and have used PyTorch in all of my projects since. I still remember that it took me a week to figure out how to build CUDA code as a PyTorch layer :). However, this is not a good implementation, and I did not reach the same mAP as the original Caffe code.
This project is no longer maintained and may not be compatible with PyTorch releases newer than 0.4.0. So I suggest:
- You can still read and study this code if you want to re-implement Faster R-CNN yourself;
- Use the better PyTorch implementations by ruotianluo or Detectron.pytorch if you want to train Faster R-CNN on your own data.
This is a PyTorch implementation of Faster R-CNN, based mainly on py-faster-rcnn and TFFRCNN.
For details, please refer to the paper Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks by Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun.
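The paper parameterizes box regression as offsets of a ground-truth box relative to an anchor. As a plain-Python sketch of that formula (the function name is illustrative, not this repo's API):

```python
import math

def bbox_transform(anchor, gt):
    """Compute the (tx, ty, tw, th) regression targets from the Faster R-CNN
    paper, mapping an anchor box onto a ground-truth box.
    Both boxes are (x1, y1, x2, y2)."""
    wa = anchor[2] - anchor[0] + 1.0
    ha = anchor[3] - anchor[1] + 1.0
    xa = anchor[0] + 0.5 * wa
    ya = anchor[1] + 0.5 * ha
    wg = gt[2] - gt[0] + 1.0
    hg = gt[3] - gt[1] + 1.0
    xg = gt[0] + 0.5 * wg
    yg = gt[1] + 0.5 * hg
    tx = (xg - xa) / wa      # center offsets, normalized by anchor size
    ty = (yg - ya) / ha
    tw = math.log(wg / wa)   # log-space width/height ratios
    th = math.log(hg / ha)
    return tx, ty, tw, th
```

An anchor that matches its ground-truth box exactly yields all-zero targets, which is what makes the regression easy to learn for well-placed anchors.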
- Forward pass for detection
- RoI Pooling layer with C extensions on CPU (forward only)
- RoI Pooling layer on GPU (forward and backward)
- Training on VOC2007
- TensorBoard support
- Evaluation
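The RoI Pooling layer listed above can be sketched in pure Python for a single channel; this is an illustration of the operation, not the repo's C/CUDA kernel, and the bin-rounding details are approximate:

```python
import math

def roi_pool(feat, roi, out_h, out_w):
    """RoI max-pooling (forward pass only).
    feat: 2-D list (H x W) holding one feature-map channel;
    roi: (x1, y1, x2, y2) in feature-map coordinates;
    returns an out_h x out_w grid where each cell is the max over its bin."""
    x1, y1, x2, y2 = roi
    roi_h = max(y2 - y1 + 1, 1)
    roi_w = max(x2 - x1 + 1, 1)
    out = [[0.0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        for j in range(out_w):
            # Bin boundaries, rounded outward so every feature cell is covered.
            hs = y1 + int(math.floor(i * roi_h / out_h))
            he = y1 + int(math.ceil((i + 1) * roi_h / out_h))
            ws = x1 + int(math.floor(j * roi_w / out_w))
            we = x1 + int(math.ceil((j + 1) * roi_w / out_w))
            out[i][j] = max(feat[y][x]
                            for y in range(hs, he) for x in range(ws, we))
    return out
```

Because every output is a max over a bin, the backward pass (implemented on GPU in this repo) only routes the gradient to the argmax position of each bin.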
- Install the requirements (you can use pip or Anaconda):

      conda install pip pyyaml sympy h5py cython numpy scipy
      conda install -c menpo opencv3
      pip install easydict
- Clone the Faster R-CNN repository:

      git clone git@github.com:longcw/faster_rcnn_pytorch.git
- Build the Cython modules for nms and the roi_pooling layer:

      cd faster_rcnn_pytorch/faster_rcnn
      ./make.sh
- Download the trained model VGGnet_fast_rcnn_iter_70000.h5 and set the model path in demo.py.
- Run the demo:

      python demo.py
Follow this project (TFFRCNN) to download and prepare the training, validation, and test data, as well as the VGG16 model pre-trained on ImageNet.
Since the program loads the data from faster_rcnn_pytorch/data by default, you can set the data path as follows.
    cd faster_rcnn_pytorch
    mkdir data
    cd data
    ln -s $VOCdevkit VOCdevkit2007
Then you can set the hyper-parameters in train.py and the training parameters in the .yml file.
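As a hedged sketch of what such a .yml file looks like: the config convention here follows py-faster-rcnn, whose .yml files override the defaults in the project's config module. The keys below are taken from py-faster-rcnn's defaults and are assumptions; the exact keys available in this repo may differ.

```yaml
# Hypothetical experiment config; keys mirror py-faster-rcnn's config.py.
EXP_DIR: faster_rcnn_voc07
TRAIN:
  LEARNING_RATE: 0.001
  MOMENTUM: 0.9
  SNAPSHOT_ITERS: 10000
```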
Currently I get 0.661 mAP on VOC07, while the original paper reports 0.699 mAP.
You may need to tune the loss function defined in faster_rcnn/faster_rcnn.py by yourself.
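For orientation when tuning, the loss is based on the multi-task objective from the Fast/Faster R-CNN papers: cross-entropy for classification plus smooth-L1 for box regression on foreground boxes. A minimal plain-Python sketch of that formula (function names are illustrative, not the repo's API):

```python
import math

def smooth_l1(x, sigma=1.0):
    """Smooth L1 from the Fast R-CNN paper: 0.5*(sigma*x)^2 if |x| < 1/sigma^2,
    else |x| - 0.5/sigma^2."""
    s2 = sigma * sigma
    if abs(x) < 1.0 / s2:
        return 0.5 * s2 * x * x
    return abs(x) - 0.5 / s2

def cross_entropy(probs, label):
    """Negative log-likelihood of the true class."""
    return -math.log(probs[label])

def multi_task_loss(cls_probs, label, bbox_pred, bbox_target, lam=1.0):
    """L = L_cls + lambda * sum_i smooth_l1(t_i - t*_i); the regression term
    applies only to foreground boxes (label > 0), as in the paper."""
    loss_cls = cross_entropy(cls_probs, label)
    loss_box = 0.0
    if label > 0:  # background boxes get no regression loss
        loss_box = sum(smooth_l1(p - t)
                       for p, t in zip(bbox_pred, bbox_target))
    return loss_cls + lam * loss_box
```

Common knobs are the balance weight `lam` and `sigma`, which controls where smooth L1 switches from quadratic to linear (larger sigma makes it behave like L1 sooner, reducing the influence of outlier boxes).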
With the aid of Crayon, we can access the visualisation power of TensorBoard for any deep learning framework.
To use TensorBoard, install Crayon (https://github.com/torrvision/crayon) and set use_tensorboard = True in faster_rcnn/train.py.
Set the path of the trained model in test.py.
    cd faster_rcnn_pytorch
    mkdir output
    python test.py
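For reference, the mAP numbers quoted above use the VOC 2007 protocol, which computes 11-point interpolated average precision per class. A minimal sketch of that metric (the helper name is illustrative, not the repo's evaluation code):

```python
def voc_ap_11point(recalls, precisions):
    """11-point interpolated AP from the PASCAL VOC 2007 protocol: the mean,
    over recall thresholds 0.0, 0.1, ..., 1.0, of the maximum precision
    achieved at recall >= threshold."""
    ap = 0.0
    for t in (i / 10.0 for i in range(11)):
        # Highest precision among points whose recall clears the threshold.
        ps = [p for r, p in zip(recalls, precisions) if r >= t]
        ap += (max(ps) if ps else 0.0) / 11.0
    return ap
```

mAP is then the mean of this value over the 20 VOC classes.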
License: MIT