This repository contains the sim2real procedure and code for our ICRA 2023 paper "Intention Aware Robot Crowd Navigation with Attention-Based Interaction Graph".
In sim2real, we adapted a people detector and SLAM from previous works, and transferred a simulated crowd navigation policy to a TurtleBot2i without any real-world training.
For more details, please refer to the project website and arXiv preprint.
For experiment demonstrations, please refer to the YouTube video.
This repo only serves as a reference point for the sim2real transfer of crowd navigation.
Since there are many uncertainties in real-world experiments that may affect performance, we cannot guarantee that it is reproducible in your setup.
- Host computer:
  - CPU: Intel i7-9700 @ 3 GHz
  - GPU: Nvidia RTX 2080
  - Memory: 32 GB
- Turtlebot2i:
  - On-board computer: Nvidia Jetson Xavier
  - LiDAR: RPLIDAR A3
  - Tracking camera: Intel RealSense T265
  - Mobile base: Kobuki base
The host computer and the Turtlebot communicate through ROS by connecting to the same WiFi network.
- Host computer:
  - OS: Ubuntu 20.04
  - Python version: 3.8.10
  - CUDA version: 11.5
  - ROS version: Noetic (our code WILL NOT WORK with lower ROS versions on the host computer)
- Turtlebot2i:
  - OS: Linux
  - Python version: 3.8
  - CUDA version: CUDA is not needed unless you're running everything on board (i.e., without a host computer)
  - ROS version: Melodic
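A quick way to double-check these versions on the host computer (a hedged sketch; the commands assume a standard Ubuntu + ROS install):

```
lsb_release -d     # Ubuntu release, expect 20.04
python3 --version  # expect Python 3.8.x
nvcc --version     # CUDA toolkit version, expect 11.x (requires the CUDA toolkit on PATH)
rosversion -d      # ROS distro, expect "noetic" on the host and "melodic" on the Turtlebot
```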
- [Turtlebot] Create a catkin workspace
  ```
  mkdir ~/catkin_ws
  cd catkin_ws
  mkdir -p src
  catkin_make
  cd src
  ```
- [Turtlebot] Install ROS packages into your workspace
  ```
  cd ~/catkin_ws/src
  # turtlebot2
  git clone https://github.com/turtlebot/turtlebot.git
  git clone https://github.com/turtlebot/turtlebot_msgs.git
  git clone https://github.com/turtlebot/turtlebot_apps.git
  git clone https://github.com/turtlebot/turtlebot_interactions.git
  # kobuki
  git clone https://github.com/yujinrobot/kobuki_msgs.git
  # RP-Lidar
  git clone https://github.com/Slamtec/rplidar_ros.git
  cd ~/catkin_ws
  catkin_make
  ```
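  If `catkin_make` fails because of missing dependencies, resolving them with `rosdep` first (the same command used in the host-computer setup below) is an optional extra step worth trying:

  ```
  cd ~/catkin_ws
  rosdep install -q -y -r --from-paths src --ignore-src   # resolve missing ROS package dependencies
  catkin_make
  ```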
- [Turtlebot] Install realsense-ros following this link
  - Skip this step if you're planning to use the LiDAR for robot localization.
- [Host computer] Create a catkin workspace
  ```
  mkdir ~/catkin_ws
  cd catkin_ws
  mkdir -p src
  catkin_make
  cd src
  ```
- [Host computer] Install ROS packages into your workspace
  ```
  cd ~/catkin_ws/src
  # turtlebot2
  git clone https://github.com/turtlebot/turtlebot.git
  git clone https://github.com/turtlebot/turtlebot_msgs.git
  git clone https://github.com/turtlebot/turtlebot_apps.git
  git clone https://github.com/turtlebot/turtlebot_interactions.git
  # kobuki
  git clone https://github.com/yujinrobot/kobuki_msgs.git
  # to use lidar for SLAM
  git clone https://github.com/surfertas/turtlebot2_lidar.git
  git clone https://github.com/SteveMacenski/slam_toolbox.git
  cd slam_toolbox
  git checkout noetic-devel
  rosdep install -q -y -r --from-paths src --ignore-src
  cd ..
  # people detector
  git clone https://github.com/VisualComputingInstitute/2D_lidar_person_detection.git
  cd ~/catkin_ws
  catkin_make
  ```
- [Host computer] In `catkin_ws/src/2D_lidar_person_detection/dr_spaam_ros/config/topics.yaml`, line 14, change `/segway/scan_multi` to `/person_pts` to remove static obstacles from the input scans of the people detector. A quick check is sketched below.
  - Otherwise, the policy network may receive false positive detections, because DR-SPAAM is not very robust to different hardware and environments.
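  An optional sanity check after the edit (the exact YAML layout may differ across `dr_spaam_ros` versions):

  ```
  # confirm which scan topic DR-SPAAM subscribes to after the edit
  grep -n "scan\|person_pts" ~/catkin_ws/src/2D_lidar_person_detection/dr_spaam_ros/config/topics.yaml
  ```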
- [Host computer] Place `findloc_bgrm.py` into `catkin_ws/src/2D_lidar_person_detection` (an example command follows).
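  For example (the source path is a placeholder for wherever you cloned this repo):

  ```
  cp /path/to/this_repo/findloc_bgrm.py ~/catkin_ws/src/2D_lidar_person_detection/
  ```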
- [Host computer] Download the virtual environment from this link, and create an identical virtual environment on your computer (a sketch for rebuilding it manually is shown below).
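  If you prefer to rebuild the environment rather than download it, a minimal sketch, assuming the `~/virtual_envs/tb2` path used later in this README and a hypothetical `requirements.txt` matching the downloaded environment:

  ```
  python3 -m venv ~/virtual_envs/tb2
  source ~/virtual_envs/tb2/bin/activate
  pip install --upgrade pip
  pip install -r requirements.txt   # hypothetical: dependency list matching the downloaded environment
  ```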
- [Host computer] Connect the robot and the host computer to the same WiFi network. In `tb2.bash`, change `ROS_MASTER` to the IP address of the robot, and change `ROS_IP` to the IP address of the host computer (a sketch of the file is shown below).
  - Skip this step, and all steps related to `source tb2.bash`, if you're running everything on the robot's on-board computer.
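  A minimal sketch of what `tb2.bash` is expected to contain (standard ROS networking variables; the IP addresses are placeholders, and your copy may use slightly different variable names):

  ```
  # tb2.bash (sketch): point this machine at the ROS master running on the robot
  export ROS_MASTER_URI=http://192.168.1.100:11311   # placeholder: robot's IP address
  export ROS_IP=192.168.1.101                        # placeholder: this host computer's IP address
  # after `source tb2.bash`, `rostopic list` should show the robot's topics
  ```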
- Clone the crowd navigation repo.
- Modify the configurations.
  - Modify the configurations in `crowd_nav/configs/config.py` and `arguments.py` following the instructions here.
  - In `crowd_nav/configs/config.py`, set `action_space.kinematics = "unicycle"` if your robot has a differential drive.
    - Explanations on holonomic vs. differential drive robots
  - Adjust `sim.circle_radius`, `sim.arena_size`, and `sim.human_num` based on your real environment (a quick check is sketched below).
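    A quick check of the values you just set, run from the root of the crowd navigation repo:

    ```
    grep -n "kinematics" crowd_nav/configs/config.py
    grep -n "circle_radius\|arena_size\|human_num" crowd_nav/configs/config.py
    ```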
- After you change the configurations, run `python train.py`.
  - The checkpoints and configuration files will be saved to the folder specified by `output_dir` in `arguments.py`.
- Test the trained policy in simulation following the instructions here, and make sure the results are satisfactory (a success rate of at least around 90%).
- Create a map of the real environment using SLAM:

  a. [Turtlebot] Launch the mobile base:
     ```
     source catkin_ws/devel/setup.bash
     roslaunch turtlebot2i_bringup minimal.launch
     ```
     If "no data stream, is kobuki turned on?" shows up even though the base is fully charged, we recommend unplugging the LiDAR's USB cable and restarting the Turtlebot's on-board computer.

  b. [Turtlebot] Plug in the LiDAR's USB cable and launch the LiDAR:
     ```
     source catkin_ws/devel/setup.bash && sudo chmod 666 /dev/ttyUSB0 && sudo chmod 666 /dev/ttyUSB1 && sudo chmod 666 /dev/ttyUSB2
     roslaunch rplidar.launch
     ```

  c. [Host computer] Launch SLAM and navigation:
     ```
     source ~/tb2.bash
     source ~/catkin_ws/devel/setup.bash
     roslaunch turtlebot_navigation laser_gmapping_demo.launch
     ```

  d. [Host computer] Launch rviz:
     ```
     source ~/tb2.bash
     source ~/catkin_ws/devel/setup.bash
     roslaunch turbot_rviz nav.launch
     ```

  e. [Host computer] Launch robot teleoperation:
     ```
     source ~/tb2.bash
     roslaunch turtlebot_teleop keyboard_teleop.launch
     ```

  f. [Host computer] Teleoperate the robot around the environment until you are satisfied with the map in rviz, then save the map:
     ```
     rosrun map_server map_saver -f ~/map
     ```
     In your home directory, you will see two files: `map.yaml` and `map.pgm`.
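     Optionally, inspect the saved map before moving on; `map.yaml` follows the standard `map_server` format:

     ```
     ls -lh ~/map.yaml ~/map.pgm
     cat ~/map.yaml   # typical fields: image, resolution, origin, negate, occupied_thresh, free_thresh
     ```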
- Then, test the trained policy on the real Turtlebot in the mapped environment:
  - [Turtlebot] Launch the mobile base (see step a of the SLAM mapping above).
  - [Turtlebot] Launch the LiDAR (see step b of the SLAM mapping above).
  - [Host computer] Launch localization and navigation:
    ```
    source ~/tb2.bash
    source ~/catkin_ws/devel/setup.bash
    roslaunch turtlebot_navigation laser_amcl_demo.launch map_file:=$HOME/map.yaml
    ```
    This step is ready if the terminal shows "odom received".
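    As an extra check (assuming the demo launch file runs AMCL under its default node name), you can print one localization estimate:

    ```
    rostopic echo -n 1 /amcl_pose   # standard AMCL output topic
    ```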
  - [Host computer] Launch rviz (see step d of the SLAM mapping above).
    To calibrate the localization, use "2D Pose Estimate" to correct the initial pose of the robot, and then use "2D Nav Goal" to navigate the robot around until the localization particles converge.
  - [Host computer] To filter out the static obstacles on the map and improve the people detection:
    ```
    source ~/tb2.bash
    source ~/catkin_ws/devel/setup.bash
    cd ~/catkin_ws/src/2D_lidar_person_detection
    python findloc_bgrm.py path_to_the_map_created_in_step2
    ```
  - [Host computer] Run the DR-SPAAM people detector:
    ```
    source ~/tb2.bash
    source ~/catkin_ws/devel/setup.bash
    source ~/virtual_envs/tb2/bin/activate  # activate the virtual environment created in Setup -> Host computer -> Step 6
    roslaunch dr_spaam_ros dr_spaam_ros.launch
    ```
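    To verify that the detector is publishing, you can search for its topics rather than hard-coding a name (topic names are set in `dr_spaam_ros`'s config):

    ```
    rostopic list | grep -i spaam          # find the detector's output topics
    # then check the publishing rate of one of them, e.g.:
    # rostopic hz /dr_spaam_detections     # hypothetical topic name; use one printed above
    ```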
  - [Turtlebot] Launch the RealSense T265 camera using `t265.launch` from this repo:
    ```
    roslaunch t265.launch
    ```
    - Skip this step if you're planning to use the LiDAR for robot localization.
  - [Host computer] cd into the crowd navigation repo, then:
    - in `trained_models/your_output_dir/arguments.py`, change `env-name` to `'rosTurtlebot2iEnv-v0'`
    - in `trained_models/your_output_dir/configs/config.py`, change the configurations under `sim2real` if needed
    - then run
      ```
      python test.py
      ```
      Type in the goal position following the terminal output, and the robot will execute the policy if everything works.
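    If `test.py` does not behave as expected, a quick check that the deployment settings were picked up (run from the crowd navigation repo):

    ```
    grep -n "rosTurtlebot2iEnv" trained_models/your_output_dir/arguments.py
    grep -n "sim2real" trained_models/your_output_dir/configs/config.py
    ```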
- The robot localization from the LiDAR and the T265 is redundant; we plan to remove the dependency on the T265 later.
- We only tested our code with the hardware and software settings listed above. It may work with other robots or software versions, but we cannot provide any guarantee.
- If the RL training does not converge, we recommend starting with an easier setting (fewer humans, a larger circle radius, a larger robot speed, etc.) and then gradually increasing the task difficulty during training.
- The performance of our code can vary depending on the choice of hyperparameters and random seeds (see this reddit post). Unfortunately, we do not have the time or resources for a thorough hyperparameter search. Thus, it is normal if your results are slightly worse than what is claimed in the paper. To achieve the best performance, we recommend some manual hyperparameter tuning.
If you find the code or the paper useful for your research, please cite the following papers:
```
@inproceedings{liu2022intention,
  title={Intention Aware Robot Crowd Navigation with Attention-Based Interaction Graph},
  author={Liu, Shuijing and Chang, Peixin and Huang, Zhe and Chakraborty, Neeloy and Hong, Kaiwen and Liang, Weihang and Livingston McPherson, D. and Geng, Junyi and Driggs-Campbell, Katherine},
  booktitle={IEEE International Conference on Robotics and Automation (ICRA)},
  year={2023}
}

@inproceedings{liu2020decentralized,
  title={Decentralized Structural-RNN for Robot Crowd Navigation with Deep Reinforcement Learning},
  author={Liu, Shuijing and Chang, Peixin and Liang, Weihang and Chakraborty, Neeloy and Driggs-Campbell, Katherine},
  booktitle={IEEE International Conference on Robotics and Automation (ICRA)},
  year={2021},
  pages={3517-3524}
}
```
Other contributors:
Peixin Chang
Kaiwen Hong
Jerry Wang
Eric Liang
If you have any questions or find any bugs, please feel free to open an issue or pull request.