This code is modified from SSL_SLAM
Modifier: Wang Han, Nanyang Technological University, Singapore
[Update] AGV dataset is available online! (optional)
Ubuntu 64-bit 18.04.
ROS Melodic. ROS Installation
Follow Ceres Installation.
Follow PCL Installation.
Tested with 1.8.1
Follow OctoMap Installation.
sudo apt install ros-melodic-octomap*
For visualization purposes, this package uses hector_trajectory_server; you can install it with
sudo apt-get install ros-melodic-hector-trajectory-server
Alternatively, you may remove the hector trajectory server node from the launch file if trajectory visualization is not needed.
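The node in question typically looks like the entry below in the launch files under mms_slam/launch; the parameter values here are assumptions, so match them against your own launch file before deleting or commenting it out:

```xml
<!-- Hypothetical entry: remove this node to drop trajectory visualization -->
<node pkg="hector_trajectory_server" type="hector_trajectory_server" name="trajectory_server">
  <param name="target_frame_name" value="map" />
  <param name="source_frame_name" value="base_link" />
</node>
```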
cd ~/catkin_ws/src
git clone https://github.com/wh200720041/mms_slam.git
cd ..
catkin_make
source ~/catkin_ws/devel/setup.bash
Make the Python node executable:
roscd mms_slam
cd src
chmod +x solo_node.py
Create a conda environment (you need to install conda first):
conda create -n solo python=3.7 -y
conda activate solo
Install PyTorch and torchvision following the official instructions (choose the build matching your CUDA version):
conda install pytorch==1.7.1 torchvision==0.8.2 torchaudio==0.7.2 cudatoolkit=11.0 -c pytorch
conda install -c conda-forge addict rospkg pycocotools
Install mmdet 2.0:
roscd mms_slam
cd dependencies/mmdet
python setup.py install
Note that it takes a few minutes to install.
You may download our trained model and recorded data if you don't have a Realsense L515; by default the files should be under /home/username/Downloads.
Put the model under mms_slam/config/:
cp ~/Downloads/trained_model.pth ~/catkin_ws/src/MMS_SLAM/config/
Unzip the rosbag file in the Downloads folder:
cd ~/Downloads
unzip ~/Downloads/dynamic_warehouse.zip
If you would like to create the map at the same time, you can run
roslaunch mms_slam mms_slam_mapping.launch
If only localization is required, you can run
roslaunch mms_slam mms_slam.launch
If you would like to test instance segmentation results only, you can run
roslaunch mms_slam mms_slam_detection.launch
If you see ModuleNotFoundError: No module named 'alfred', install alfred-py via pip:
pip install alfred-py
If you have a new Realsense L515 sensor, you may follow the setup instructions below.
Follow Librealsense Installation
Copy the realsense-ros package to your catkin workspace:
cd ~/catkin_ws/src
git clone https://github.com/IntelRealSense/realsense-ros.git
cd ..
catkin_make
In your launch file, uncomment the realsense node like this:
<include file="$(find realsense2_camera)/launch/rs_camera.launch">
<arg name="color_width" value="1280" />
<arg name="color_height" value="720" />
<arg name="filters" value="pointcloud" />
</include>
and comment out the rosbag play node like this (the space in "- -clock" avoids a double hyphen, which is not allowed inside XML comments; restore "--clock" when the node is active):
<!-- rosbag
<node name="bag" pkg="rosbag" type="play" args="- -clock -r 0.4 -d 5 $(env HOME)/Downloads/dynamic_warehouse.bag" />
<param name="/use_sim_time" value="true" />
-->
The human data are collected from the COCO dataset: train2017.zip (18G) and val2017.zip (1G). The AGV data are manually collected and labelled: Download (1G).
cd ~/Downloads
unzip train2017.zip
unzip val2017.zip
unzip agv_data.zip
mv ~/Downloads/train2017 ~/Downloads/train_data
mv ~/Downloads/val2017 ~/Downloads/train_data
mv ~/Downloads/train_data/agv_data/* ~/Downloads/train_data/train2017
Note that it takes a while to unzip.
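Before training, it helps to fail fast if one of the moves above put a directory in the wrong place. A minimal sketch (the helper name and the directories you pass it are your choice, since layouts vary):

```shell
# Hypothetical helper: report the first missing directory, or confirm all exist.
check_dirs() {
    for d in "$@"; do
        if [ ! -d "$d" ]; then
            echo "missing: $d"
            return 1
        fi
    done
    echo "all present"
}

# Example: check_dirs ~/Downloads/train_data
```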
To train a model:
roscd mms_slam
cd train
python train.py train_param.py
If you have multiple GPUs (say 4), you can change '1' to your GPU count. The trained model is saved under mms_slam/train/work_dirs/xxx.pth.
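MMDetection-based code is normally launched through a distributed helper when more than one GPU is used; the launcher path below is an assumption based on the bundled mmdet, so check dependencies/mmdet/tools in your checkout. A small sketch that picks the command from the GPU count:

```shell
# Hypothetical wrapper: single-GPU trains directly with train.py, multi-GPU
# goes through mmdet's dist_train.sh launcher (path is an assumption here).
build_train_cmd() {
    ngpus="$1"
    if [ "$ngpus" -gt 1 ]; then
        echo "bash ../dependencies/mmdet/tools/dist_train.sh train_param.py $ngpus"
    else
        echo "python train.py train_param.py"
    fi
}
```

For example, run `eval "$(build_train_cmd 4)"` from mms_slam/train to train on 4 GPUs.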
Thanks to A-LOAM, LOAM, LOAM_NOTED, MMDetection and SOLO.