Code for the Paper
Mobile Video Object Detection with Temporally-Aware Feature Maps, Mason Liu and Menglong Zhu, CVPR 2018
This paper introduces an online model for object detection in videos, designed to run in real time on low-powered mobile and embedded devices. The proposed approach combines fast single-image object detection with convolutional long short-term memory (LSTM) layers to create an interweaved recurrent-convolutional architecture.
Additionally, the authors propose an efficient Bottleneck-LSTM layer that significantly reduces computational cost compared to regular LSTMs. The network achieves temporal awareness by using Bottleneck-LSTMs to refine and propagate feature maps across frames.
This approach is substantially faster than existing methods for detection in video, outperforming the fastest single-frame models in model size and computational cost while attaining accuracy comparable to much more expensive single-frame models on the ImageNet VID 2015 dataset. The model reaches a real-time inference speed of up to 15 FPS on a mobile CPU.
- Python 3.6+
- OpenCV
- PyTorch 1.0 or 0.4+
- torchvision
Download the ImageNet VID 2015 dataset from [link]. This is the link for ILSVRC2017, since the link for ILSVRC2015 seems to be down now.
To generate the lists of training, validation, and test data (make sure to change the dataset path in the scripts):
- for basenet training, run the datasets/get_VID_list.py script
- for sequential training of the LSTM layers, run the datasets/get_VID_seqs_list.py script

Note: The output of these scripts is already in the repo, so there is no need to run them again.
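The exact output format lives in the two scripts; conceptually, building the sequence list starts by grouping the extracted ILSVRC frame paths by video, in temporal order. A minimal, hypothetical sketch (file names are illustrative):

```python
import os
from collections import defaultdict

def group_frames_by_video(frame_paths):
    """Group ordered frame paths by their video directory, keeping
    frames within each video in temporal order."""
    videos = defaultdict(list)
    for path in sorted(frame_paths):
        videos[os.path.dirname(path)].append(path)
    return dict(videos)

frames = [
    "ILSVRC2015_train_00001000/000001.JPEG",
    "ILSVRC2015_train_00001000/000000.JPEG",
    "ILSVRC2015_train_00002000/000000.JPEG",
]
grouped = group_frames_by_video(frames)
# grouped has one ordered frame list per video directory
```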
Two custom PyTorch Dataset classes are written in datasets/vid_dataset.py; they ingest this dataset and provide random batches / complete data during training and validation. One class is for basenet training, while the other is for sequential training, where the unroll length of the LSTM is 10 and 10 consecutive frames from a video sequence are provided as a single input. We unroll for 10 steps, as mentioned in the paper.
Make sure to be in a Python 3.6+ environment with all the dependencies installed.
As described in section 4.2 of the paper, the model has two types of LSTM layers: the Bottleneck LSTM layer, which reduces the number of channels by a factor of 0.25, and the normal Conv LSTM, which has the same number of output channels as input channels.
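The channel arithmetic implied by the 0.25 factor can be illustrated as follows (the 1024-channel figure is Conv13 of MobileNet V1 at width_mult = 1):

```python
def lstm_out_channels(in_channels, bottleneck=True):
    """A Bottleneck LSTM shrinks the channel count by a factor of 0.25;
    a plain Conv LSTM keeps it unchanged."""
    return in_channels // 4 if bottleneck else in_channels

# Conv13 of MobileNet V1 (width_mult = 1) outputs 1024 channels, so the
# first Bottleneck LSTM's state has 256 channels.
bottleneck_channels = lstm_out_channels(1024)              # 256
conv_lstm_channels = lstm_out_channels(256, bottleneck=False)  # 256
```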
Training of the multiple LSTM layers is done in sequential order, i.e., fix all the layers before the newly added LSTM layer and fine-tune the rest.
Make sure to keep the batch size the same across lstm1, lstm2, lstm3, lstm4, and lstm5 training, as the sizes of the hidden and cell states of the LSTM layers must stay consistent throughout training. Also make sure to keep the width multiplier the same.
By default, the GPU is used for training. The freeze_net command-line argument freezes the model as described in the paper.
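Conceptually, freezing disables gradients for every parameter up to and including the boundary layer, so only the new LSTM and the layers after it are updated (in PyTorch, by setting param.requires_grad = False). A stand-alone sketch using a dummy Param class in place of real nn.Parameters (layer names are illustrative):

```python
from dataclasses import dataclass

@dataclass
class Param:
    name: str
    requires_grad: bool = True

def freeze_up_to(params, boundary):
    """Freeze every parameter up to and including the layer whose name
    starts with `boundary`; later layers keep their gradients."""
    seen_boundary = False
    for p in params:
        is_boundary = p.name.startswith(boundary)
        if seen_boundary and not is_boundary:
            break  # everything after the boundary stays trainable
        p.requires_grad = False
        seen_boundary = seen_boundary or is_boundary
    return params

params = [Param("conv1.weight"), Param("conv13.weight"),
          Param("conv13.bias"), Param("lstm1.weight")]
freeze_up_to(params, "conv13")
# conv1 and conv13 parameters are frozen; lstm1 remains trainable
```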
Before each checkpoint is saved, the model is validated on the validation set. All checkpoint models are saved in the models directory.
The basenet is MobileNet V1 with SSD. Train the basenet by executing the following command:
python train_mvod_basenet.py --datasets {path to ILSVRC2015 root dir} --batch_size 60 --num_epochs 30 --width_mult 1
If you want to train with any other width multiplier, change the width_mult command-line argument accordingly.
For more help on command line args, execute the following command:
python train_mvod_basenet.py --help
As described in section 4.2 of the paper, the first Bottleneck LSTM layer is placed after the Conv13 layer, and we freeze all the layers up to and including Conv13. To train the model with one Bottleneck LSTM layer, execute the following command:
python train_mvod_lstm1.py --datasets {path to ILSVRC2015 root dir} --batch_size 10 --num_epochs 30 --pretrained {path to pretrained basenet model} --width_mult 1 --freeze_net
Refer to the script docstring and inline comments in train_mvod_lstm1.py for an understanding of the execution.
As described in section 4.2 of the paper, the second Bottleneck LSTM layer is placed after the Feature Map 1 layer, and we freeze all the layers up to and including Feature Map 1. To train the model with two Bottleneck LSTM layers, execute the following command:
python train_mvod_lstm2.py --datasets {path to ILSVRC2015 root dir} --batch_size 10 --num_epochs 30 --pretrained {path to pretrained lstm 1} --width_mult 1 --freeze_net
Refer to the script docstring and inline comments in train_mvod_lstm2.py for an understanding of the execution.
As described in section 4.2 of the paper, the third Bottleneck LSTM layer is placed after the Feature Map 2 layer, and we freeze all the layers up to and including Feature Map 2. To train the model with three Bottleneck LSTM layers, execute the following command:
python train_mvod_lstm3.py --datasets {path to ILSVRC2015 root dir} --batch_size 10 --num_epochs 30 --pretrained {path to pretrained lstm 2} --width_mult 1 --freeze_net
Refer to the script docstring and inline comments in train_mvod_lstm3.py for an understanding of the execution.
As described in section 4.2 of the paper, the first normal LSTM layer is placed after the Feature Map 3 layer, and we freeze all the layers up to and including Feature Map 3. To train the model with 3 Bottleneck LSTM layers and 1 normal LSTM layer, execute the following command:
python train_mvod_lstm4.py --datasets {path to ILSVRC2015 root dir} --batch_size 10 --num_epochs 30 --pretrained {path to pretrained lstm 3} --width_mult 1 --freeze_net
Refer to the script docstring and inline comments in train_mvod_lstm4.py for an understanding of the execution.
As described in section 4.2 of the paper, the second normal LSTM layer is placed after the Feature Map 4 layer, and we freeze all the layers up to and including Feature Map 4. To train the model with 3 Bottleneck LSTM layers and 2 normal LSTM layers, execute the following command:
python train_mvod_lstm5.py --datasets {path to ILSVRC2015 root dir} --batch_size 10 --num_epochs 30 --pretrained {path to pretrained lstm 4} --width_mult 1 --freeze_net
Refer to the script docstring and inline comments in train_mvod_lstm5.py for an understanding of the execution.
The evaluation script evaluate.py reports validation accuracy (mAP).
For more info execute this command:
python evaluate.py --help
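The mAP@0.5 metric in the table below matches each detection to a ground-truth box of the same class at an IoU threshold of 0.5. The box-overlap test at the heart of it:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / float(union)

# A detection counts as a true positive only if IoU >= 0.5.
overlap = iou((0, 0, 10, 10), (5, 0, 15, 10))  # 1/3: not a match at 0.5
```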
Model | Training data | Testing data | mAP@0.5 | Params (M) | MAC (B)
---|---|---|---|---|---
Bottleneck LSTM (width_mult = 1) | ImageNet VID train | ImageNet VID validation | 54.4 | 3.24 | 1.13
Bottleneck LSTM (width_mult = 0.5) | ImageNet VID train | ImageNet VID validation | 43.8 | 0.86 | 0.19
TODO: Train the model and report the metric score. Due to limited GPU resources and the huge size of the ImageNet VID 2015 dataset, training the model is taking a huge amount of time. I will report the metric score here once training is done. Update: I have trained the basenet, and training of lstm1 is now in progress.
- PyTorch Docs [http://pytorch.org/docs/master]
- PyTorch SSD [https://github.com/qfgaohao/pytorch-ssd]
- LSTM Object Detection [https://github.com/tensorflow/models/tree/master/research/lstm_object_detection]
Thanks a lot to [Pichao Wang] for training the model and suggesting several changes.
BSD