H. Fu, M. Gong, C. Wang, K. Batmanghelich and D. Tao: Deep Ordinal Regression Network for Monocular Depth Estimation. IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2018.
The shared code is a Caffe implementation of our CVPR'18 paper (DORN). The provided Caffe is not our internal version, but it can still be used for evaluation. We provide the pretrained models for KITTI and NYU Depth v2 here (see Tab. 3 and Tab. 4 in our paper). The code has been tested on CentOS release 6.9 with CUDA 9.0.176, cuDNN 7.0, a Tesla V100, and Anaconda Python 2.7.
Our method won 1st prize in the Robust Vision Challenge 2018, ranking 1st on both KITTI and ScanNet. Slides can be downloaded here.
This code is for research purposes only. If you use the provided Caffe, you may also need to follow the instructions of DeepLab v2 and PSPNet.
See Caffe for installation.
- Clone the repository:
git clone https://github.com/hufu6371/DORN.git
- Build and link to pycaffe:
cd $DORN_ROOT
edit Makefile.config to match your CUDA/cuDNN and Anaconda paths
make all -j8
make pycaffe
export PYTHONPATH=$DORN_ROOT/python:$DORN_ROOT/pylayer:$PYTHONPATH
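To confirm that pycaffe is importable and can see the GPU, a quick sanity check (plain pycaffe calls, not part of this repo's scripts) is:
import caffe

# Use GPU 0; switch to caffe.set_mode_cpu() if no GPU is available.
caffe.set_device(0)
caffe.set_mode_gpu()
print(caffe.__file__)  # should point into $DORN_ROOT/python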
- Download our pretrained models:
mv cvpr_kitti.caffemodel $DORN_ROOT/models/KITTI/
mv cvpr_nyuv2.caffemodel $DORN_ROOT/models/NYUV2/
- Demo (KITTI and NYUV2):
python demo_kitti.py --filename=./data/KITTI/demo_01.png --outputroot=./result/KITTI
python demo_nyuv2.py --filename=./data/NYUV2/demo_01.png --outputroot=./result/NYUV2
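The demo scripts save their depth predictions under --outputroot. The sketch below is only an illustration of how such predictions could be loaded back into metric depth; it assumes 16-bit PNGs scaled by 256 (the common KITTI depth convention) and a hypothetical output filename, so check the demo scripts for the actual format and scale.
import numpy as np
from PIL import Image

# Hypothetical output path; adjust to whatever demo_kitti.py actually writes.
pred_png = './result/KITTI/demo_01.png'

depth = np.asarray(Image.open(pred_png), dtype=np.float32)
depth /= 256.0  # assumed scale factor; verify against the demo/eval code
print('depth range: %.2f - %.2f m' % (depth.min(), depth.max()))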
The evaluation scripts and the ground-truth depth maps for KITTI and NYU Depth v2 are contained in the zip files. You may also need to download the predictions from Eigen et al. for the center crop used in our evaluation scripts.
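For reference, the error metrics reported in Tab. 3 and Tab. 4 (abs rel, sq rel, RMSE, RMSE log, and the δ accuracy thresholds) are the standard ones. Below is a minimal NumPy sketch of these metrics; it is not the released evaluation script and omits the center crop and depth-cap handling that the scripts apply.
import numpy as np

def depth_metrics(gt, pred):
    # gt, pred: arrays of metric depth; evaluate only valid ground-truth pixels.
    mask = gt > 0
    gt, pred = gt[mask], pred[mask]
    thresh = np.maximum(gt / pred, pred / gt)
    return {
        'abs_rel': np.mean(np.abs(gt - pred) / gt),
        'sq_rel': np.mean(((gt - pred) ** 2) / gt),
        'rmse': np.sqrt(np.mean((gt - pred) ** 2)),
        'rmse_log': np.sqrt(np.mean((np.log(gt) - np.log(pred)) ** 2)),
        'delta1': np.mean(thresh < 1.25),
        'delta2': np.mean(thresh < 1.25 ** 2),
        'delta3': np.mean(thresh < 1.25 ** 3),
    }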
@inproceedings{FuCVPR18-DORN,
TITLE = {{Deep Ordinal Regression Network for Monocular Depth Estimation}},
AUTHOR = {Fu, Huan and Gong, Mingming and Wang, Chaohui and Batmanghelich, Kayhan and Tao, Dacheng},
BOOKTITLE = {{IEEE Conference on Computer Vision and Pattern Recognition (CVPR)}},
YEAR = {2018}
}
Huan Fu: hufu6371@uni.sydney.edu.au