Code for DAN-Conv: Depth Aware Non-local Convolution for LiDAR Depth Completion
Requirements:
- pytorch >= 1.6.0
- tqdm
- fire
You need to download the entire raw KITTI dataset to train.
Warning: it weighs about 175 GB, so make sure you also have enough free space to unzip it!
Our default settings expect that you have converted the PNG images to JPEG.
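The conversion step above can be sketched as follows. This is not part of the repo: the helper names are hypothetical, it assumes Pillow is installed (not listed in the requirements), and it assumes the loader expects a `.jpg` extension — check the dataset code before relying on it.

```python
import os

def jpeg_path(png_path):
    # Hypothetical helper: map a .png path to a .jpg path next to it.
    root, _ = os.path.splitext(png_path)
    return root + ".jpg"

def convert_all(kitti_root):
    # Sketch only: walk raw/ and convert every PNG to JPEG.
    # Assumes Pillow is installed; import is deferred so the rest
    # of this module works without it.
    from PIL import Image
    for dirpath, _, files in os.walk(os.path.join(kitti_root, "raw")):
        for name in files:
            if name.endswith(".png"):
                src = os.path.join(dirpath, name)
                Image.open(src).convert("RGB").save(jpeg_path(src), quality=95)
```

Whether the original PNGs can then be deleted depends on how much disk space you want to reclaim; the repo's default settings only read the JPEGs.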
The directory containing the KITTI data should be organized like this (the drive folder shown is one example; `raw/` holds all drives):

/path/to/kitti
├── raw/
│   └── 2011_09_26/
│       └── 2011_09_26_drive_0001_sync/
│           ├── image_02/
│           └── image_03/
├── test_depth_completion_anonymous/
│   ├── image/
│   └── velodyne_raw/
└── val_selection_cropped/
    ├── image/
    ├── velodyne_raw/
    └── groundtruth_depth/
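A quick sanity check for the layout above can be sketched like this. It is not part of the repo — the function name is hypothetical and it only verifies the fixed sub-directories (it does not check individual drive folders under `raw/`):

```python
import os

# Fixed sub-directories from the layout above (drive folders under raw/
# vary per download, so only raw/ itself is checked).
EXPECTED = [
    "raw",
    "test_depth_completion_anonymous/image",
    "test_depth_completion_anonymous/velodyne_raw",
    "val_selection_cropped/image",
    "val_selection_cropped/velodyne_raw",
    "val_selection_cropped/groundtruth_depth",
]

def check_kitti_layout(root):
    # Return the expected sub-directories that are missing under root.
    return [d for d in EXPECTED if not os.path.isdir(os.path.join(root, d))]
```

Calling `check_kitti_layout("/path/to/kitti")` before training and aborting if the returned list is non-empty saves a failed run partway through data loading.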
Compile the nearest-neighbors extension:

cd nearest_neighbors
make
To train on a single GPU, run:

python train.py --data_path /path/to/kitti --split full
For multi-GPU training, set nproc_per_node in the script train_multi_gpus.sh to the number of GPUs you have, and adjust the batch size accordingly.
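The batch-size adjustment above usually means dividing the desired effective batch size evenly across processes. A minimal sketch, assuming the repo's batch-size option is per-process (the function name is hypothetical — check train.py for how the option is actually interpreted):

```python
def per_gpu_batch_size(effective_batch, n_gpus):
    # Split a desired effective batch size evenly across GPUs.
    # With one process per GPU, each process sees this many samples
    # per step, so the gradient averages over effective_batch samples.
    if effective_batch % n_gpus != 0:
        raise ValueError("effective batch size must divide evenly across GPUs")
    return effective_batch // n_gpus
```

For example, keeping an effective batch of 16 on 4 GPUs means passing a per-process batch size of 4.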
Then run:

. train_multi_gpus.sh