Updated implementation of our system "Robot Person Following Under Partial Occlusion".
Tested environment:
- Ubuntu 18.04
- ROS Melodic
- NVIDIA RTX 2060 / GTX 1650
- Create a conda environment and install PyTorch:
conda create -n mono_following python=3.8
conda activate mono_following
# Pick the cudatoolkit version that matches your GPU and driver; adjust the
# command below (see the PyTorch install selector) if 10.2 does not fit your setup.
conda install pytorch torchvision cudatoolkit=10.2 -c pytorch
- Install the Python dependencies:
pip install -r requirements.txt
git clone https://github.com/eric-wieser/ros_numpy
cd ros_numpy
python setup.py install
- Install the C++ dependencies:
- OpenCV 3.4
- Eigen >= 3.0
- Download the yolox-s and yolox-m checkpoints, create the directory
mono_tracking/scripts/detector_2d/weights
and put the checkpoints into it.
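A quick sanity check before launching can save a failed node start. This is a minimal sketch; the checkpoint filenames `yolox_s.pth` and `yolox_m.pth` are assumptions, so adjust them to whatever the detector node actually loads.

```python
# Sketch: verify the YOLOX checkpoints are in place before launching.
# Filenames (yolox_s.pth, yolox_m.pth) are assumptions, not confirmed
# by the repo; change them to match the detector's config.
from pathlib import Path

def missing_weights(weights_dir, expected=("yolox_s.pth", "yolox_m.pth")):
    """Return the expected checkpoint files not found in weights_dir."""
    d = Path(weights_dir)
    return [name for name in expected if not (d / name).is_file()]

if __name__ == "__main__":
    missing = missing_weights("mono_tracking/scripts/detector_2d/weights")
    if missing:
        print("Missing checkpoints:", ", ".join(missing))
```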
Make sure your TF tree provides the frame chain base_link -> camera_link -> camera_optical_link.
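The frame chain above composes by matrix multiplication. A minimal sketch with homogeneous transforms; the mounting offsets are illustrative numbers (on the robot these come from the URDF or a static TF publisher), while the camera_link -> optical rotation follows the usual ROS optical-frame convention (z forward, x right, y down).

```python
# Sketch: composing base_link -> camera_link -> camera_optical_link.
# Translation values are made-up examples; only the optical-frame
# rotation reflects the standard ROS convention.
import numpy as np

def make_tf(translation, rotation=np.eye(3)):
    """Build a 4x4 homogeneous transform from a rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

# base_link -> camera_link: e.g. camera mounted 0.2 m forward, 0.5 m up.
T_base_cam = make_tf([0.2, 0.0, 0.5])

# camera_link -> camera_optical_link: body frame (x fwd, z up) to
# optical frame (z fwd, x right, y down).
R_opt = np.array([[0.0, 0.0, 1.0],
                  [-1.0, 0.0, 0.0],
                  [0.0, -1.0, 0.0]])
T_cam_opt = make_tf([0.0, 0.0, 0.0], R_opt)

# A point expressed in the optical frame maps to base_link via the product.
T_base_opt = T_base_cam @ T_cam_opt
```

For example, a person detected 1 m in front of the camera (optical z = 1) lands at x = 1.2 m, z = 0.5 m in base_link under these example offsets.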
# Launch width-based monocular people tracking.
# With a rosbag, set use_sim_time:=true; if the image topic is compressed, also set sim:=true:
roslaunch mono_tracking all_mono_tracking.launch sim:=true use_sim_time:=true
# On a real robot, set use_sim_time:=false:
roslaunch mono_tracking all_mono_tracking.launch sim:=false use_sim_time:=false
- Input: /camera/color/image_raw
- Output: mono_tracking/msg/TrackArray.msg
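A subscriber to the tracker output iterates over the tracked people. A toy sketch of that consumption step; the `Track` fields below (`id`, `x`, `y`) are assumptions standing in for the actual fields of mono_tracking/msg/TrackArray.msg, so check the .msg definition for the real names.

```python
# Sketch: picking the nearest tracked person from a TrackArray-like list.
# Field names are assumed, not taken from the real message definition.
from dataclasses import dataclass
import math

@dataclass
class Track:
    id: int
    x: float  # position in the robot frame, meters (assumed)
    y: float

def nearest_track(tracks):
    """Return the track closest to the robot, or None if no one is tracked."""
    return min(tracks, key=lambda t: math.hypot(t.x, t.y), default=None)
```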
# Launch our GRR_SLT_MPF person following; use_sim_time:=true for rosbag playback:
roslaunch mono_following mono_following.launch use_sim_time:=true
# use_sim_time:=false when running on the real robot:
roslaunch mono_following mono_following.launch use_sim_time:=false
- Input: mono_tracking/msg/TrackArray.msg
- Output: mono_following/msg/Target.msg
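The following node must keep publishing the same person as the target from frame to frame. A minimal id-sticky selector sketch, assuming tracks arrive as `(id, distance)` pairs; the real node relies on appearance re-identification under occlusion, which this toy version does not attempt.

```python
# Sketch: keep the current target id while it is still tracked;
# otherwise fall back to the nearest remaining track (or None).
# The (id, distance) pair format is an assumption for illustration.
def select_target(tracks, current_id):
    """tracks: iterable of (track_id, distance). Returns the chosen id."""
    by_id = {tid: dist for tid, dist in tracks}
    if current_id in by_id:
        return current_id
    return min(by_id, key=by_id.get) if by_id else None
```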
# Launch the motion controller:
roslaunch mono_control mono_controlling.launch
- Input: mono_following/msg/Target.msg; /bluetooth_teleop/joy
- Output: /cmd_vel
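The control step turns the target's position into a velocity command on /cmd_vel. A sketch of one proportional control law with saturation; the gains, limits, and the 1 m standoff distance are illustrative values, not the ones used by mono_control.

```python
# Sketch: a proportional follower producing a Twist-like (linear.x,
# angular.z) command. All gains/limits below are made-up examples.
import math

def follow_cmd(x, y, standoff=1.0, k_lin=0.5, k_ang=1.0,
               max_lin=0.8, max_ang=1.0):
    """Drive toward the target at (x, y) in the robot frame,
    stopping `standoff` meters short of it. Returns (lin, ang)."""
    dist = math.hypot(x, y)
    angle = math.atan2(y, x)
    lin = max(-max_lin, min(max_lin, k_lin * (dist - standoff)))
    ang = max(-max_ang, min(max_ang, k_ang * angle))
    return lin, ang
```

In practice the joystick input (/bluetooth_teleop/joy) would typically gate or override this command for safety.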
- Simplify the code
- Release evaluation results