nnarain/create_navigation

Unable to re-create the blog post.


I've been following this blog post and trying to re-create it with an iRobot Create 2:
https://nnarain.github.io/2021/01/06/iRobot-Create-2-Navigation.html

I am not able to recreate the part where you give it two points on a grid and it creates a point cloud using the bump sensors in simulation.
At the very end of the simulation blog post we ran roslaunch create_gazebo maze.launch and Gazebo opened just like in the post. In the navigation blog post, when we run the final command, roslaunch create_gazebo maze.launch nav_mode:=map, the Roomba in your GIF creates a point cloud as it moves. For us it moves in RViz, but it does not create a point cloud when it is near a wall, and the map is not updated with the borders.

We get this error (and the same for all six light sensors):
[ERROR] Error computing light sensor poisitions: "right_front_light_sensor_link" passed to lookupTransform argument source_frame does not exist.
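For reference, the missing frames can be checked with the standard tf command-line tools. This is just a sanity-check sketch; base_footprint is only an assumed parent frame here, substitute whatever the robot's root link actually is:

    # dump the whole TF tree to frames.pdf and look for the light sensor links
    rosrun tf view_frames
    # or query one transform directly; tf_echo reports if the frame does not exist
    rosrun tf tf_echo base_footprint right_front_light_sensor_link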

In your navigation blog post, where you give it a navigation point on the grid and it builds a map using the bump sensors, we are not able to reproduce that. When we run roslaunch create_gazebo maze.launch nav_mode:=mapping and give it a navigation point in RViz, it tries to get there but gets blocked by a wall and does not build or map the wall, unlike your GIF where the point cloud maps out the maze.

Are you using my fork of the create_robot package?

The light sensors are defined in the URDF here:
https://github.com/nnarain/create_robot/blob/melodic-devel/create_description/urdf/create_2.urdf.xacro
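If the fork is installed, one quick way to confirm that this URDF (with the light sensor links) is the one actually being picked up is to expand it with xacro and search for the links. A rough sketch using standard ROS tooling:

    # confirm which create_description package is on the ROS package path
    rospack find create_description
    # expand the xacro and check that the light sensor links are defined
    rosrun xacro xacro $(rospack find create_description)/urdf/create_2.urdf.xacro | grep light_sensor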

Thank you, we fixed the simulation and now we want to test it on our actual iRobot. I have the iRobot hooked up to my Pi over a USB-TTL connection. What roslaunch command do I use to start mapping with the actual robot?

Should be the same command. The idea is to have octomap_server running to start exploring. Also keep in mind the light sensors in Gazebo are simulated with ray casts; in real life they aren't as reliable (subject to ambient light and whatnot), and I had only limited success with them. You might want to consider getting a cheap-ish off-the-shelf lidar for something more reliable. I have one of these: https://ca.robotshop.com/products/ydlidar-x2-360-laser-scanner though I've not actually integrated it. Depends on what you want to accomplish.
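If it helps, a quick sanity check that octomap_server is actually up and publishing (exact node and topic names depend on the launch files) is:

    # verify the octomap server node and its topics are present
    rosnode list | grep -i octomap
    rostopic list | grep -i octomap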

To clarify, launching the nav stack should be the same. You will need to launch the create_driver on the robot itself. A rough sketch of the split is below.
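Roughly like this; create_bringup/create_2.launch is what the upstream create_robot package provides (adjust if the fork names it differently), and navigation.launch plus <pi-hostname> are only placeholders, so use whatever real-robot launch file this repo actually has:

    # On the Pi connected to the Create over USB-TTL: start the driver
    roslaunch create_bringup create_2.launch

    # On the machine running RViz / the nav stack, point at the same ROS master
    export ROS_MASTER_URI=http://<pi-hostname>:11311

    # Then bring up the nav stack and octomap_server ("navigation.launch" is an
    # assumed name -- use the real-robot launch file from this package)
    roslaunch create_navigation navigation.launch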

Thank you! We were able to implement our lidar and autonomous navigation and have been testing the past few days :)