The model's inference time is not properly reported in the original paper, because the original code ignores the asynchronous execution of CUDA kernels: the CPU reads the clock before the GPU has finished computing. To measure the inference time accurately, the GPU should be synchronized before recording the current time:
torch.cuda.synchronize()
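The corrected measurement can be sketched as below. This is a generic timing helper, not the repo's actual benchmark script; the warm-up count and iteration count are illustrative choices.

```python
import time
import torch

def timed_inference(model, inputs, warmup=5, iters=20):
    """Average per-run inference time. The GPU is synchronized
    before each clock read so queued asynchronous kernels are
    actually counted (a sketch, not the repo's exact script)."""
    model.eval()
    with torch.no_grad():
        for _ in range(warmup):            # warm-up runs, not timed
            model(inputs)
        if torch.cuda.is_available():
            torch.cuda.synchronize()       # drain pending kernels first
        start = time.perf_counter()
        for _ in range(iters):
            model(inputs)
        if torch.cuda.is_available():
            torch.cuda.synchronize()       # wait for the last kernel
    return (time.perf_counter() - start) / iters
```

Without the two `synchronize()` calls, `perf_counter()` only measures the time to enqueue the kernels, which explains the gap in the table below.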
We re-estimated the inference time of the following open-source models:
| Method | Runtime (not synchronized) | Runtime (synchronized) |
| --- | --- | --- |
| PENet | 0.032s | 0.161s |
| ENet | 0.019s | 0.064s |
| NLSPN | 0.127s | 0.130s |
| ACMNet | 0.330s | 0.350s |
| DeepLiDAR | 0.051s | 0.351s |
| MSG-CHN | 0.011s | 0.035s |
| FusionNet | 0.022s | 0.029s |
We thank wdjose for pointing out this problem. Given its speed, ENet is the recommended choice for real-time applications.
This repo is the PyTorch implementation of our ICRA 2021 paper "Towards Precise and Efficient Image Guided Depth Completion", developed by Mu Hu, Shuling Wang, Bin Li, Shiyu Ning, Li Fan, and Xiaojin Gong at Zhejiang University and Huawei Shanghai.
Create a new issue for any code-related questions. Feel free to contact me directly at muhu@zju.edu.cn for any paper-related questions.
- The proposed full model ranks 1st in the KITTI depth completion online leaderboard at the time of submission.
- It infers much faster than most of the top ranked methods.
- Both ENet and PENet can be trained end to end on two 11GB GPUs.
- Our network is trained on the KITTI dataset alone, without pretraining on Cityscapes or other similar driving datasets (either synthetic or real).
The two-branch backbone is designed to thoroughly exploit color-dominant and depth-dominant information from their respective branches and make the fusion of two modalities effective. Note that it is the depth prediction result obtained from the color-dominant branch that is input to the depth-dominant branch, not a guidance map like those in DeepLiDAR and FusionNet.
To encode 3D geometric information, it simply augments a conventional convolutional layer via concatenating a 3D position map to the layer’s input.
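The geometric augmentation can be sketched as a thin wrapper around `nn.Conv2d`. The class name and interface below are illustrative, not the repo's actual API; the position map is assumed to be a per-pixel 3-channel tensor of back-projected coordinates.

```python
import torch
import torch.nn as nn

class GeometryConv(nn.Module):
    """Hypothetical sketch of a geometry-encoded convolution: a
    standard conv whose input is augmented by concatenating a
    3-channel 3D position map, as described above."""
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        # +3 input channels for the concatenated 3D position map
        self.conv = nn.Conv2d(in_ch + 3, out_ch, kernel_size,
                              padding=kernel_size // 2)

    def forward(self, x, position_map):
        # position_map: (B, 3, H, W) per-pixel 3D coordinates
        return self.conv(torch.cat([x, position_map], dim=1))
```

Compared with a plain convolution, the only change is three extra input channels, so the overhead is negligible.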
We introduce a dilation strategy similar to the well known dilated convolutions to enlarge the propagation neighborhoods.
We design an implementation that makes the propagation from each neighbor truly parallel, which greatly accelerates the propagation procedure.
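The parallel idea can be illustrated with `F.unfold`, which gathers every pixel's (dilated) neighborhood in one batched operation instead of looping over neighbors. This is a simplified sketch of one propagation step, not the repo's exact DA-CSPN++ implementation; the affinity tensor is assumed to be pre-normalized.

```python
import torch
import torch.nn.functional as F

def propagate_step(depth, affinity, dilation=1):
    """One spatial-propagation step over a 3x3 neighborhood,
    computed for all pixels in parallel (simplified sketch).
    depth:    (B, 1, H, W) current depth estimate
    affinity: (B, 9, H, W) normalized weight for each neighbor
    dilation: spacing of the neighborhood, enlarging its extent
    """
    b, _, h, w = depth.shape
    # gather the (dilated) 3x3 neighborhood of every pixel at once
    patches = F.unfold(depth, kernel_size=3, dilation=dilation,
                       padding=dilation)            # (B, 9, H*W)
    patches = patches.view(b, 9, h, w)
    # weighted sum over the 9 neighbors -> propagated depth
    return (affinity * patches).sum(dim=1, keepdim=True)
```

Because the nine neighbor contributions are fused into a single tensor contraction, the GPU processes them together rather than sequentially, which is the source of the speed-up.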
Our released implementation is tested on:
- Ubuntu 16.04
- Python 3.7.4 (Anaconda 2019.10)
- PyTorch 1.3.1 / torchvision 0.4.2
- NVIDIA CUDA 10.0.130
- 4x NVIDIA RTX 2080 Ti GPUs
pip install numpy matplotlib Pillow
pip install scikit-image
pip install opencv-contrib-python==3.4.2.17
- Download the KITTI Depth Dataset and KITTI Raw Dataset from their websites. The overall data directory is structured as follows:
├── kitti_depth
| ├── depth
| | ├── data_depth_annotated
| | | ├── train
| | | ├── val
| | ├── data_depth_velodyne
| | | ├── train
| | | ├── val
| | ├── data_depth_selection
| | | ├── test_depth_completion_anonymous
| | | ├── test_depth_prediction_anonymous
| | | ├── val_selection_cropped
├── kitti_raw
| ├── 2011_09_26
| ├── 2011_09_28
| ├── 2011_09_29
| ├── 2011_09_30
| ├── 2011_10_03
Download our pre-trained models:
- PENet (i.e., the proposed full model with dilation_rate=2): Download Here
- ENet (i.e., the backbone): Download Here
Note that the pre-trained models do not need to be decompressed; load the .pth.tar files directly.
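Loading might look like the sketch below. The `"model"` key is an assumption for illustration; inspect the checkpoint's keys for the actual layout, and the stand-in model here is not the repo's network.

```python
import os
import tempfile
import torch

# Stand-in model used only to produce a checkpoint for this sketch
net = torch.nn.Conv2d(1, 1, 3)
path = os.path.join(tempfile.gettempdir(), "demo.pth.tar")
torch.save({"model": net.state_dict()}, path)

# .pth.tar checkpoints are loaded directly -- no decompression step
ckpt = torch.load(path, map_location="cpu")
state = ckpt.get("model", ckpt)     # fall back to a raw state dict
net.load_state_dict(state)
```

`map_location="cpu"` lets the checkpoint load on machines without the GPU it was saved from.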
A complete list of training options is available with
python main.py -h
Here we adopt a multi-stage training strategy to train the backbone, DA-CSPN++, and the full model progressively. However, end-to-end training is feasible as well.
- Train ENet (Part Ⅰ)
CUDA_VISIBLE_DEVICES="0,1" python main.py -b 6 -n e
# -b for batch size
# -n for network model
- Train DA-CSPN++ (Part Ⅱ)
CUDA_VISIBLE_DEVICES="0,1" python main.py -b 6 -f -n pe --resume [enet-checkpoint-path]
# -f for freezing the parameters in the backbone
# --resume for initializing the parameters from the checkpoint
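The freezing behind `-f` amounts to disabling gradients on the backbone so only the refinement parameters update. A minimal sketch with a hypothetical two-part model (the attribute names `backbone`/`refine` are illustrative, not the repo's):

```python
import torch
import torch.nn as nn

# Hypothetical two-part model standing in for ENet + DA-CSPN++
model = nn.Module()
model.backbone = nn.Linear(4, 4)
model.refine = nn.Linear(4, 1)

# Freeze the backbone (the idea behind the '-f' flag)
for p in model.backbone.parameters():
    p.requires_grad = False

# Hand the optimizer only the parameters that still train
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(trainable, lr=0.1)
```

Frozen parameters accumulate no gradients, so the backbone's weights stay exactly at their checkpoint values during Part II.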
- Train PENet (Part Ⅲ)
CUDA_VISIBLE_DEVICES="0,1" python main.py -b 10 -n pe -he 160 -w 576 --resume [penet-checkpoint-path]
# -he, -w for the image size after random cropping
CUDA_VISIBLE_DEVICES="0" python main.py -b 1 -n e --evaluate [enet-checkpoint-path]
CUDA_VISIBLE_DEVICES="0" python main.py -b 1 -n pe --evaluate [penet-checkpoint-path]
# test the trained model on the val_selection_cropped data
CUDA_VISIBLE_DEVICES="0" python main.py -b 1 -n pe --evaluate [penet-checkpoint-path] --test
# generate and save results of the trained model on the test_depth_completion_anonymous data
If you use our code or method in your work, please cite the following:
@inproceedings{hu2020PENet,
title={Towards Precise and Efficient Image Guided Depth Completion},
author={Hu, Mu and Wang, Shuling and Li, Bin and Ning, Shiyu and Fan, Li and Gong, Xiaojin},
booktitle={ICRA},
year={2021}
}
The original code framework is adapted from "Self-supervised Sparse-to-Dense: Self-supervised Depth Completion from LiDAR and Monocular Camera", developed by Fangchang Ma, Guilherme Venturelli Cavalheiro, and Sertac Karaman at MIT.
The CoordConv part is adapted from "An Intriguing Failing of Convolutional Neural Networks and the CoordConv Solution".