This repository contains the PyTorch implementation associated with the paper:
"DeepMapping: Unsupervised Map Estimation From Multiple Point Clouds", Li Ding and Chen Feng, CVPR 2019 (Oral).
If you find DeepMapping useful in your research, please cite:
@InProceedings{Ding_2019_CVPR,
  author    = {Ding, Li and Feng, Chen},
  title     = {DeepMapping: Unsupervised Map Estimation From Multiple Point Clouds},
  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2019}
}
Requires Python 3.x, PyTorch, Open3D, and the other common packages listed in requirements.txt:
pip3 install -r requirements.txt
Running on a GPU is highly recommended. The code has been tested with Python 3.6.5, PyTorch 0.4.0, and Open3D 0.4.0.
A set of 2D simulated point clouds is provided as ./data/2D/v1_pose0.tar. Extract the tar file:
tar -xvf ./data/2D/v1_pose0.tar -C ./data/2D/
A new sub-directory ./data/2D/v1_pose0/ will be created. This folder contains 256 local point clouds saved in PCD file format. The corresponding ground-truth sensor poses are saved in the gt_pose.mat file as a 256-by-3 matrix: the i-th row represents the sensor pose [x,y,theta] for the i-th point cloud.
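As a quick illustration of the pose convention (not part of the repository), each [x,y,theta] row can be turned into a 3x3 homogeneous 2D rigid transform that maps points from the sensor frame to the global frame. The function name below is hypothetical, and the variable key inside gt_pose.mat is an assumption you should check after loading it with scipy.io.loadmat:

```python
import numpy as np

def pose_to_matrix(pose):
    """Convert a sensor pose [x, y, theta] into a 3x3 homogeneous
    2D rigid transform: rotate by theta, then translate by (x, y)."""
    x, y, theta = pose
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0.0, 0.0, 1.0]])

# Hypothetical usage: load the ground-truth poses and transform a
# local point into the global frame (the 'pose' key is an assumption).
# gt = scipy.io.loadmat('./data/2D/v1_pose0/gt_pose.mat')['pose']
T = pose_to_matrix([1.0, 2.0, np.pi / 2])          # pose at (1, 2), rotated 90 deg
p_global = T @ np.array([1.0, 0.0, 1.0])           # local point (1, 0), homogeneous
```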
To run DeepMapping, execute the script ./script/run_train_2D.sh. By default, the results will be saved to ./results/2D/.
DeepMapping allows for seamless integration of a “warm start” to reduce the convergence time with improved performance. Instead of starting from scratch, you can first perform a coarse registration of all point clouds using incremental ICP:
./script/run_icp.sh
The coarse registration can be further improved by DeepMapping. To do so, simply set INIT_POSE=/PATH/TO/ICP/RESULTS/pose_est.npy in ./script/run_train_2D.sh. Please see the comments in the script for detailed instructions.
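For intuition about what the warm start computes, here is a minimal sketch of point-to-point ICP for 2D point clouds, written with numpy/scipy rather than the repository's actual script. It alternates nearest-neighbor matching with a best-fit rigid transform (the Kabsch/SVD solution); incremental ICP applies this pairwise, chaining consecutive scans:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_2d(source, target, iters=30):
    """Rigidly align `source` (N,2) onto `target` (M,2) by iterating
    nearest-neighbor matching and an SVD-based rigid fit.
    Returns the accumulated 3x3 homogeneous transform."""
    T = np.eye(3)
    src = source.copy()
    tree = cKDTree(target)
    for _ in range(iters):
        # Match each source point to its nearest target point.
        _, idx = tree.query(src)
        matched = target[idx]
        # Best rigid transform between the matched sets (Kabsch).
        mu_s, mu_t = src.mean(0), matched.mean(0)
        H = (src - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:      # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_t - R @ mu_s
        src = src @ R.T + t
        step = np.eye(3)
        step[:2, :2], step[:2, 2] = R, t
        T = step @ T
    return T
```

This sketch assumes a reasonable initial overlap between scans, which is why incremental (scan-to-scan) application works better than registering distant scans directly.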
The estimated sensor poses are saved as a NumPy array in pose_est.npy. To evaluate the registration, execute the script
./script/run_eval.sh
The absolute trajectory error (ATE) will be computed as the error metric.
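For reference, a common definition of ATE is the RMSE between estimated and ground-truth positions after a best-fit rigid alignment of the two trajectories; the repository's evaluation script may differ in details. A minimal 2D sketch under that assumption:

```python
import numpy as np

def absolute_trajectory_error(est_xy, gt_xy):
    """RMSE between estimated and ground-truth 2D positions (N,2 each)
    after a best-fit rigid alignment (Kabsch). One common ATE
    definition; the repo's eval script may differ in details."""
    mu_e, mu_g = est_xy.mean(0), gt_xy.mean(0)
    H = (est_xy - mu_e).T @ (gt_xy - mu_g)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_g - R @ mu_e
    aligned = est_xy @ R.T + t
    return np.sqrt(np.mean(np.sum((aligned - gt_xy) ** 2, axis=1)))
```

Because of the rigid alignment, a trajectory that differs from the ground truth only by a global rotation and translation yields an ATE of zero.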