Hou-Ning Hu*, Yen-Chen Lin*, Ming-Yu Liu, Hsien-Tzu Cheng, Yung-Ju Chang, Min Sun (* indicates equal contribution)
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017 (Oral presentation)
Official implementation of the CVPR 2017 Oral paper "Deep 360 Pilot: Learning a Deep Agent for Piloting through 360° Sports Videos" in TensorFlow.
Project page: https://aliensunmin.github.io/project/360video/
Paper: high-resolution version, arXiv pre-print, open access
- Linux
- NVIDIA GPU + CUDA 8.0 + cuDNN v5.1
- Python 2.7 with NumPy
- TensorFlow 1.2.1
Choose the version you prefer: we provide both TensorFlow 0.12 and 1.2.1 implementations, so you may pick whichever version suits your setup.
git clone https://github.com/eborboihuc/Deep360Pilot-CVPR17.git
cd Deep360Pilot-CVPR17/misc
git clone https://github.com/yenchenlin/Deep360Pilot-optical-flow.git
- Download our dataset and pre-trained model. Running the script below will print the download links:

python require.py

Please download the model and dataset, then place them under ./checkpoint and ./data, respectively.
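After downloading, the repository should look roughly like this. This is a sketch of the layout implied by the commands in this README; only checkpoint/, data/, main.py, and misc/ are stated explicitly, and any other files are omitted:

```
Deep360Pilot-CVPR17/
├── main.py
├── checkpoint/        # pre-trained models go here
├── data/              # dataset and labels go here
└── misc/
    └── Deep360Pilot-optical-flow/
```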
To train a model with downloaded dataset:
python main.py --mode train --gpu 0 -d bmx -l 10 -b 16 -p classify --opt Adam
Then, continue with the regress phase, initialized from the best classify checkpoint:
python main.py --mode train --gpu 0 -d bmx -l 10 -b 16 -p regress --opt Adam --model checkpoint/bmx_16boxes_lam10.0/bmx_lam1_classify_best_model
To test with an existing model:
python main.py --mode test --gpu 0 -d bmx -l 10 -b 16 -p classify --model checkpoint/bmx_16boxes_lam10.0/bmx_lam1_classify_best_model
Or,
python main.py --mode test --gpu 0 -d bmx -l 10 -b 16 -p regress --model checkpoint/bmx_16boxes_lam10.0/bmx_lam10.0_regress_best_model
To get prediction with an existing model:
python main.py --mode pred --model checkpoint/bmx_16boxes_lam10.0/bmx_lam10.0_regress_best_model --gpu 0 -d bmx -l 10 -b 16 -p regress -n zZ6FlZRLvek_6
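The flags in the commands above can be summarized with a minimal argparse sketch. This is an illustrative reconstruction, not the repository's actual parser: the flag names are taken from the example commands, while the long option names, defaults, and comments are assumptions.

```python
import argparse

def build_parser():
    """Hypothetical sketch of the main.py command-line interface.

    Flag names follow the example commands in this README; long names
    and defaults are assumed for illustration only.
    """
    p = argparse.ArgumentParser(description="Deep 360 Pilot (CLI sketch)")
    p.add_argument("--mode", choices=["train", "test", "pred"], required=True,
                   help="train a model, test it, or produce predictions")
    p.add_argument("--gpu", default="0", help="GPU id to run on")
    p.add_argument("-d", "--domain", help="sports domain, e.g. bmx")
    p.add_argument("-l", "--lam", type=float, help="lambda value, e.g. 10")
    p.add_argument("-b", "--boxes", type=int, help="number of boxes, e.g. 16")
    p.add_argument("-p", "--phase", choices=["classify", "regress"],
                   help="training phase: classify first, then regress")
    p.add_argument("--opt", default="Adam", help="optimizer, e.g. Adam")
    p.add_argument("--model", default=None, help="checkpoint path to load")
    p.add_argument("-n", "--name", default=None, help="video clip name")
    return p

if __name__ == "__main__":
    args = build_parser().parse_args(
        ["--mode", "train", "--gpu", "0", "-d", "bmx",
         "-l", "10", "-b", "16", "-p", "classify", "--opt", "Adam"]
    )
    print(args.mode, args.domain, args.boxes)  # prints: train bmx 16
```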
Please download the trained model for TensorFlow v1.2.1 here. Pass --model {model_path} to main.py to load it.
We provide a small clip-based test datafile. Please download it here; you can use this toy datafile to walk through our data-processing pipeline.
If you want to reproduce the results on our dataset, please download the dataset here and the labels here, and place them under ./data.
Please download the clip-based dataset here, then use the code from here to convert it to our input format.
If you find our code useful for your research, please cite:
@InProceedings{Hu_2017_CVPR,
author = {Hu, Hou-Ning and Lin, Yen-Chen and Liu, Ming-Yu and Cheng, Hsien-Tzu and Chang, Yung-Ju and Sun, Min},
title = {Deep 360 Pilot: Learning a Deep Agent for Piloting Through 360deg Sports Videos},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {July},
year = {2017}
}
Hou-Ning Hu / @eborboihuc and Yen-Chen Lin / @yenchenlin