- CUDA 12.0
- cuDNN 8.5.0
- torch 1.7.1
- torchvision 0.8.0
```bash
pip install -r LRRU/requirements.txt
pip3 install opencv-python
pip3 install opencv-python-headless
```
We used WandB to visualize and track our experiments, and NVIDIA Apex for multi-GPU training, following NLSPN.
Apex can be installed as follows:
```bash
$ cd PATH_TO_INSTALL
$ git clone https://github.com/NVIDIA/apex
$ cd apex
$ pip install -v --disable-pip-version-check --no-build-isolation --no-cache-dir ./
```
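Since NLSPN-style training wraps the model and optimizer with Apex's `amp.initialize`, a minimal sketch of that setup is shown below. `wrap_for_amp` is a hypothetical helper, not part of the LRRU code; it falls back to plain FP32 so it also runs before Apex is installed.

```python
# Hedged sketch of NLSPN-style mixed-precision setup with NVIDIA Apex.
# Apex is only importable after the install step above, so we fall back
# to plain FP32 training when it is missing.
try:
    from apex import amp
    HAVE_APEX = True
except ImportError:
    amp = None
    HAVE_APEX = False

def wrap_for_amp(model, optimizer, opt_level="O1"):
    """Return (model, optimizer), amp-initialized when Apex is present.

    opt_level="O1" (patched mixed precision) is Apex's common default;
    this helper is an illustration, not the LRRU training code.
    """
    if HAVE_APEX:
        # amp.initialize casts model and optimizer for mixed precision.
        return amp.initialize(model, optimizer, opt_level=opt_level)
    return model, optimizer
```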
Build and install the DCN module by following the instructions here.
The KITTI DC dataset is available at the KITTI DC website; the expected data structure is:
```
.
├── depth_selection
│   ├── test_depth_completion_anonymous
│   │   ├── image
│   │   ├── intrinsics
│   │   └── velodyne_raw
│   ├── test_depth_prediction_anonymous
│   │   ├── image
│   │   └── intrinsics
│   └── val_selection_cropped
│       ├── groundtruth_depth
│       ├── image
│       ├── intrinsics
│       └── velodyne_raw
├── train
│   ├── 2011_09_26_drive_0001_sync
│   │   ├── image_02
│   │   │   └── data
│   │   ├── image_03
│   │   │   └── data
│   │   ├── oxts
│   │   │   └── data
│   │   └── proj_depth
│   │       ├── groundtruth
│   │       └── velodyne_raw
│   └── ...
└── val
    ├── 2011_09_26_drive_0002_sync
    └── ...
```
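Before training, the layout above can be sanity-checked with a small script. `check_kitti_dc_root` is a hypothetical helper, not part of the LRRU repo, and only tests a few representative sub-directories (the per-drive folders under `train/` and `val/` vary, so they are not listed).

```python
import os

# Representative sub-directories from the KITTI DC layout above.
REQUIRED_DIRS = [
    "depth_selection/test_depth_completion_anonymous/image",
    "depth_selection/test_depth_completion_anonymous/intrinsics",
    "depth_selection/test_depth_completion_anonymous/velodyne_raw",
    "depth_selection/val_selection_cropped/groundtruth_depth",
    "depth_selection/val_selection_cropped/image",
    "depth_selection/val_selection_cropped/intrinsics",
    "depth_selection/val_selection_cropped/velodyne_raw",
    "train",
    "val",
]

def check_kitti_dc_root(root):
    """Return the required sub-directories missing under root (empty list = OK)."""
    return [d for d in REQUIRED_DIRS
            if not os.path.isdir(os.path.join(root, d))]
```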
```bash
$ sh train.sh

# train LRRU_Mini model
# python LRRU/train_apex.py -c train_lrru_mini_kitti.yml
# train LRRU_Tiny model
# python LRRU/train_apex.py -c train_lrru_tiny_kitti.yml
# train LRRU_Small model
# python LRRU/train_apex.py -c train_lrru_small_kitti.yml
# train LRRU_Base model
# python LRRU/train_apex.py -c train_lrru_base_kitti.yml
```
Download the pretrained model and place it in the corresponding path, then run:

```bash
$ sh val.sh

# validate LRRU_Mini model
# python LRRU/val.py -c val_lrru_mini_kitti.yml
# validate LRRU_Tiny model
# python LRRU/val.py -c val_lrru_tiny_kitti.yml
# validate LRRU_Small model
# python LRRU/val.py -c val_lrru_small_kitti.yml
# validate LRRU_Base model
# python LRRU/val.py -c val_lrru_base_kitti.yml
```
| Methods | Pretrained Model | Loss | RMSE [mm] | MAE [mm] | iRMSE [1/km] | iMAE [1/km] |
| --- | --- | --- | --- | --- | --- | --- |
| LRRU-Mini | download link | L1 + L2 | 806.3 | 210.0 | 2.3 | 0.9 |
| LRRU-Tiny | download link | L1 + L2 | 763.8 | 198.9 | 2.1 | 0.8 |
| LRRU-Small | download link | L1 + L2 | 745.3 | 195.7 | 2.0 | 0.8 |
| LRRU-Base | download link | L1 + L2 | 729.5 | 188.8 | 1.9 | 0.8 |
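For reference, the four metric columns follow the standard KITTI DC definitions: RMSE and MAE are computed on depth in millimetres, and iRMSE/iMAE on inverse depth in 1/km, over pixels with valid (non-zero) ground truth. A minimal pure-Python sketch (a hypothetical helper, not the repo's evaluation code):

```python
import math

def kitti_dc_metrics(pred_mm, gt_mm):
    """Standard KITTI DC metrics (sketch).

    pred_mm, gt_mm: flat sequences of depths in millimetres; pixels
    with gt == 0 carry no ground truth and are skipped.
    Returns (RMSE [mm], MAE [mm], iRMSE [1/km], iMAE [1/km]).
    """
    pairs = [(p, g) for p, g in zip(pred_mm, gt_mm) if g > 0]
    n = len(pairs)
    rmse = math.sqrt(sum((p - g) ** 2 for p, g in pairs) / n)
    mae = sum(abs(p - g) for p, g in pairs) / n
    # Inverse depth in 1/km: 1 km = 1e6 mm, so 1/d_km = 1e6 / d_mm.
    inv = [(1e6 / p, 1e6 / g) for p, g in pairs]
    irmse = math.sqrt(sum((ip - ig) ** 2 for ip, ig in inv) / n)
    imae = sum(abs(ip - ig) for ip, ig in inv) / n
    return rmse, mae, irmse, imae
```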
We thank the ACs and the reviewers for their insightful comments, which were very helpful in improving our paper!
We are especially grateful to NLSPN for their novel work and excellent open-source code, and we appreciate IP_Basic, GuideNet, and DySPN, which inspired our model design.
In addition, we thank all the open-source projects that have effectively promoted the development of the depth completion community!
Non-learning methods: RAL_Non-Learning_DepthCompletion.
Supervised methods:
S2D,
CSPN,
PENet,
ACMNet,
MDANet,
DeepLiDAR,
MSG-CHN,
Sparse-Depth-Completion,
GAENet,
ABCD,
SemAttNet,
CompletionFormer,
ReDC.
Unsupervised methods: S2D, ScaffFusion-SSL, KBNet, ScaffFusion, VOICED.
If I have accidentally overlooked your work, please contact me and I will add it.
```bibtex
@InProceedings{LRRU_ICCV_2023,
  author    = {Wang, Yufei and Li, Bo and Zhang, Ge and Liu, Qi and Gao, Tao and Dai, Yuchao},
  title     = {LRRU: Long-short Range Recurrent Updating Networks for Depth Completion},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  year      = {2023},
}
```