- Turtlebot SLAM, RRT path planning and target detection using SIFT
- Clone the `rrt_exploration` package into your catkin_ws
- Install the ROS navigation stack; for Kinetic run `sudo apt-get install ros-kinetic-navigation`
- Ensure that you have the ROS gmapping package; for Kinetic run `sudo apt-get install ros-kinetic-gmapping`
- Clone the `rrt_exploration_tutorials` ROS package for simulation
- Clone the `rrt_exploration` ROS package for the physical Turtlebot
- Clone the `urg_node` ROS package
- Clone `robot_explorer` from source
- Install OpenCV
- Build the workspace by running `catkin_make` in your `~/catkin_ws`
- Source the workspace by running `source devel/setup.bash` in your catkin_ws
For more information visit: RRT wiki, Hokuyo Driver wiki
- Check for USB connectivity by running `ls -l /dev/ttyACM0`
- To publish live Hokuyo sensor data to the scan topic, run `rosrun urg_node urg_node`
- Controller configuration can be inspected by running `jstest /dev/input/js0`
- Create `my_ps3_teleop.launch` to reflect your controller configuration
- To test teleoperation with the controller, run `roslaunch turtlebot_teleop my_ps3_teleop.launch`
- Turtlebot hardware setup
- To check for the Turtlebot USB connection, run `ls -al /dev | grep -i usb`
- You should see the Kobuki USB connection
- More about the Kobuki robot
Simulates and stages the bot in RViz and Gazebo. Contains a wall-follower node that subscribes to the rrt_exploration topic `/robot_1/base_scan` and publishes to the topic `/robot_1/mobile_base/commands/velocity`, which drives the bot.
- Run: `roslaunch robot_explorer wall_follow.launch`
- For more information: Wall Follower
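The control law behind a wall follower like the one above can be sketched in plain Python. The function below is a hypothetical illustration, not the actual robot_explorer node (which handles the `/robot_1/base_scan` subscription and velocity publishing via ROS); the target distance and gain values are assumptions.

```python
# Sketch of a proportional wall-following control law (illustration only).
# The real robot_explorer node subscribes to /robot_1/base_scan and publishes
# Twist messages on /robot_1/mobile_base/commands/velocity; the ROS plumbing
# is omitted here.

def wall_follow_cmd(right_range, target_dist=0.5, gain=1.0, max_turn=1.0):
    """Return (linear, angular) velocity from the range to the wall on the right.

    ROS convention: positive angular z turns left, negative turns right.
    """
    error = right_range - target_dist  # positive means too far from the wall
    # Steer toward the wall when too far, away when too close, with clamping.
    angular = max(-max_turn, min(max_turn, -gain * error))
    linear = 0.2  # constant forward speed
    return linear, angular
```

A node would call this each time a laser scan arrives, feeding in the minimum range over the right-hand sector of the scan.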
RRT path planning using goals provided by the service provider `fetch_goal.py`. The service provider posts 10 goals, each further from the origin than the last. The script can easily be modified to post targets around the map for exploration and target discovery.
- To run: `roslaunch rrt_exploration_tutorials single_simulated_house.launch`
- To run the service provider: `python fetch_goal.py`
- The green line is the robot's current trajectory
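The goal sequence described above ("10 goals, each further from the origin than the last") can be sketched as below. The function name, step size, and spiral layout are assumptions for illustration; the real `fetch_goal.py` serves its goals over a ROS service rather than returning a list.

```python
import math

# Sketch of the goal sequence: 10 goals, each strictly further from the
# origin than the last. The spiral layout and 0.5 m step are assumptions;
# the real fetch_goal.py serves goals via a ROS service instead.

def make_goals(n=10, step=0.5):
    goals = []
    for i in range(1, n + 1):
        r = i * step               # radius grows with every goal
        theta = i * (math.pi / 4)  # sweep around the map as the radius grows
        goals.append((r * math.cos(theta), r * math.sin(theta)))
    return goals
```

Replacing the spiral with waypoints chosen around the map is the "easily modified" case mentioned above: only the (x, y) computation changes.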
- Scale-invariant feature detection, which takes an image of an object and a target image as input and, if the object is found, outputs a graphical image of the object's location.
- To test SIFT with the test image, run `python matching_script.py`, which tests on the image `test_pic.jpg`
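SIFT matching typically keeps a candidate match only when its best descriptor distance is clearly smaller than the second-best (Lowe's ratio test). The sketch below shows that filtering step on plain descriptor vectors; it is an illustration, not the actual `matching_script.py`, which would obtain descriptors from OpenCV's SIFT detector.

```python
# Sketch of Lowe's ratio test, the match-filtering step used in SIFT matching.
# Descriptors are plain lists of floats here; matching_script.py would get
# 128-dimensional descriptors from OpenCV's SIFT detector instead.

def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def ratio_test_matches(query_desc, train_desc, ratio=0.75):
    """Return (query_idx, train_idx) pairs whose best match is unambiguous."""
    matches = []
    for qi, q in enumerate(query_desc):
        # Distances from this query descriptor to every train descriptor.
        dists = sorted((euclidean(q, t), ti) for ti, t in enumerate(train_desc))
        # Keep only matches whose best distance clearly beats the second best.
        if len(dists) >= 2 and dists[0][0] < ratio * dists[1][0]:
            matches.append((qi, dists[0][1]))
    return matches
```

Enough surviving matches localize the object; with OpenCV one would then fit a homography to draw the object's outline in the target image.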
To run with the Turtlebot, you need to connect the PC to the Turtlebot and the Hokuyo laser scanner; refer to Working with hardware for details.
- Run: `roslaunch robot_explorer setup.launch`
- Run: `python fetch_goal.py`
- This launches gmapping, the Turtlebot navigation stack, RRT path planning, and the Hokuyo driver nodes.
- This can drive the robot and perform mapping and localization, but had some trouble with path planning.
- The path planner has ROS subscription issues; I suspect a namespace problem is preventing communication, despite publishing to the correct topics.
- Still need a simple SIFT node that publishes camera data to the network for target detection and localization.
- Use images captured from the RealSense camera.