An Accurate Moving Object Segmentation Network for LiDAR Range-View
MotionRV_1stage is for one-stage training. After one-stage training finishes, put the resulting pretrained weight into MotionRV_2stage for two-stage training.
Download the SemanticKITTI dataset from the SemanticKITTI website.
Our pretrained weights can be downloaded from OneDrive: the best one-stage weight achieves an IoU of 73.88% on validation sequence 08, and the best two-stage weight achieves 76.67%.
Run auto_gen_residual_images.py to build the residual images (num_last_n=8), and check that the paths are correct before running.
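A residual image encodes per-pixel range differences between the current scan and a past scan. As a minimal sketch of the core computation (a hypothetical helper, assuming both range images are already pose-aligned and spherically projected, following the normalized-difference formulation common in range-view MOS):

```python
import numpy as np

def residual_image(range_cur: np.ndarray, range_last: np.ndarray) -> np.ndarray:
    """Normalized absolute range difference between two aligned range images.

    Pixels invalid in either image (range <= 0) are set to 0. The past scan
    is assumed to already be transformed into the current frame and
    re-projected, as auto_gen_residual_images.py does using the poses.
    """
    valid = (range_cur > 0) & (range_last > 0)
    residual = np.zeros_like(range_cur)
    # Normalize by the current range to reduce the bias toward far points.
    residual[valid] = np.abs(range_cur[valid] - range_last[valid]) / range_cur[valid]
    return residual
```

With num_last_n=8, this computation is repeated for each of the 8 preceding scans, yielding 8 residual channels per frame.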
Linux:
Ubuntu 18.04, CUDA 11.1, PyTorch 1.7.1
Use conda to create and activate the environment:
cd MotionRV_1stage
conda env create -f environment.yml
conda activate motionrv
TorchSparse:
sudo apt install libsparsehash-dev
pip install --upgrade git+https://github.com/mit-han-lab/torchsparse.git@v1.4.0
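A quick way to confirm the extension installed correctly is a generic import check (not part of the repo's scripts):

```python
import importlib.util

def is_installed(package: str) -> bool:
    """Return True if `package` can be imported in the current environment."""
    return importlib.util.find_spec(package) is not None

# After the pip install above succeeds, this should report True:
print("torchsparse installed:", is_installed("torchsparse"))
```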
Check that the paths in ddp_train.sh are correct, then run it to train with 4 GPUs (adjust the GPU count to your setup):
cd MotionRV_1stage
bash script/ddp_train.sh
Check that the paths in train_2stage.sh are correct, then run it to train with a single GPU:
cd MotionRV_2stage
bash script/train_2stage.sh
Check that the paths in infer.sh are correct, then run it to infer the predicted labels:
cd MotionRV_1stage / cd MotionRV_2stage
bash script/infer.sh
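The predicted labels follow the SemanticKITTI .label format: one uint32 per point, with the semantic class in the lower 16 bits and the instance ID in the upper 16 bits. A minimal sketch for inspecting a prediction file (a hypothetical helper; the class IDs in the comment are the assumed SemanticKITTI MOS convention):

```python
import numpy as np

def load_semantickitti_labels(path: str) -> np.ndarray:
    """Load a SemanticKITTI .label file and return per-point semantic IDs.

    Each label is stored as a uint32: the lower 16 bits hold the semantic
    class, the upper 16 bits the instance ID.
    """
    raw = np.fromfile(path, dtype=np.uint32)
    return raw & 0xFFFF  # keep only the semantic class

# For MOS, predictions are typically binary: 9 = static, 251 = moving
# (assumed IDs from the SemanticKITTI moving-object convention).
```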
Check that the paths in eval.sh are correct, then run it to evaluate the predictions and obtain the IoU values reported in the paper:
cd MotionRV_1stage / cd MotionRV_2stage
bash script/eval.sh
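The reported metric is the IoU of the moving class. A minimal sketch of how it is computed from predicted and ground-truth moving masks (a hypothetical helper, not the repo's evaluator):

```python
import numpy as np

def moving_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """IoU of the positive (moving) class: TP / (TP + FP + FN)."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    # If neither mask contains any moving point, the IoU is defined as 1.
    return float(intersection) / float(union) if union > 0 else 1.0
```

For example, a prediction of [1, 1, 0, 0] against ground truth [1, 0, 1, 0] has one true positive and a union of three points, giving an IoU of 1/3.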
You can also evaluate our pretrained weights to validate their MOS performance.