
Navigation using move_base and GPS waypoints


outdoor_navigation

Broken Outdoor Navigation...

An outdoor navigation framework that moves a robot from its current position to a final goal using GPS coordinates taken from Google Maps. This package has many limitations and is used mainly to test the navigation stack as a whole. However, it can be useful for people who would like to:

  • Use rosbridge and Javascript for publishing and subscribing.
  • Modify the global_planner in move_base and send goals to move_base (see the sketch after this list).
  • See configuration files for Google Cartographer, robot_localization, move_base, and the sensors.
  • Run semantic segmentation and convert the result to a 3D point cloud using the Realsense D435.
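For reference, below is a minimal sketch of sending one goal to move_base through actionlib. It is not code from this repo; the node name, frame id, and coordinates are placeholders.

    // Minimal sketch (not from this repo) of sending a goal to move_base.
    #include <ros/ros.h>
    #include <move_base_msgs/MoveBaseAction.h>
    #include <actionlib/client/simple_action_client.h>

    typedef actionlib::SimpleActionClient<move_base_msgs::MoveBaseAction> MoveBaseClient;

    int main(int argc, char** argv)
    {
      ros::init(argc, argv, "simple_goal_sender");

      MoveBaseClient ac("move_base", true);  // connect to the move_base action server
      ac.waitForServer();

      move_base_msgs::MoveBaseGoal goal;
      goal.target_pose.header.frame_id = "map";   // goal expressed in the map frame
      goal.target_pose.header.stamp = ros::Time::now();
      goal.target_pose.pose.position.x = 1.0;     // placeholder coordinates
      goal.target_pose.pose.orientation.w = 1.0;  // identity orientation

      ac.sendGoal(goal);
      ac.waitForResult();

      if (ac.getState() == actionlib::SimpleClientGoalState::SUCCEEDED)
        ROS_INFO("Goal reached");
      else
        ROS_WARN("move_base did not reach the goal");
      return 0;
    }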

The navigation framework:

ROS_Web_Google

The ROS_Web_Google is used to:

  • Find and send waypoints to the robot.
  • Track GPS data (used initially to track the raw GPS data and the Kalman filter output).

To use:

  • Register a key for the Google Maps API: https://developers.google.com/maps/documentation/javascript/get-api-key. Then set the key in GoogleMap.html.
  • To publish and subscribe from Javascript, first replace the IP address in the 'url' field of GoogleMapRosInit.js with your own, then run roslaunch rosbridge_server rosbridge_websocket.launch.
  • Add the current and destination positions -> Get path -> Publish Path. The path will be published on /jsPath.
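On the robot side, a node like the sketch below can consume the page's output. It assumes /jsPath carries a nav_msgs/Path; check GoogleMapRosInit.js for the actual message type before relying on this.

    // Sketch of a listener for the path published by the web page.
    // Assumption: /jsPath is a nav_msgs/Path.
    #include <ros/ros.h>
    #include <nav_msgs/Path.h>

    void pathCallback(const nav_msgs::Path::ConstPtr& msg)
    {
      ROS_INFO("Received a path with %zu waypoints", msg->poses.size());
    }

    int main(int argc, char** argv)
    {
      ros::init(argc, argv, "js_path_listener");
      ros::NodeHandle nh;
      ros::Subscriber sub = nh.subscribe("/jsPath", 1, pathCallback);
      ros::spin();
      return 0;
    }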

catkin_ws

Contains some drivers and other files:

  • To run the Pioneer 3-DX and all the sensors (UM7 IMU, RPLIDAR A2 2D lidar, Adafruit GPS; no camera): roslaunch p3dx_2dnav rosaria_sensors.launch.
  • To run the Realsense D435 camera: roslaunch realsense2_camera rs_pointcloud.launch. This publishes a single-channel (black-and-white) point cloud and an RGB image.
  • To run move_base: roslaunch p3dx_2dnav p3dx_real.launch. If you have a powerful computer, feel free to fine-tune DWA to improve its performance; a sketch of changing one parameter at runtime follows this list.
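As one way to experiment with DWA tuning, the sketch below changes a parameter at runtime through dynamic_reconfigure. It assumes the local planner is dwa_local_planner loaded as /move_base/DWAPlannerROS; the parameter name and value are illustrative only.

    // Sketch: adjust one DWA parameter at runtime via dynamic_reconfigure.
    #include <ros/ros.h>
    #include <dynamic_reconfigure/Reconfigure.h>
    #include <dynamic_reconfigure/DoubleParameter.h>

    int main(int argc, char** argv)
    {
      ros::init(argc, argv, "dwa_tuner");
      ros::NodeHandle nh;

      dynamic_reconfigure::Reconfigure srv;
      dynamic_reconfigure::DoubleParameter max_vel;
      max_vel.name = "max_vel_x";   // one of DWA's velocity limits
      max_vel.value = 0.5;          // illustrative value; tune for your robot
      srv.request.config.doubles.push_back(max_vel);

      if (ros::service::call("/move_base/DWAPlannerROS/set_parameters", srv))
        ROS_INFO("DWA parameter updated");
      else
        ROS_WARN("set_parameters service not available");
      return 0;
    }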

sidewalk_detection

Please follow this tutorial for semantic segmentation: https://jsk-docs.readthedocs.io/projects/jsk_recognition/en/latest/deep_learning_with_image_dataset/overview.html. The jsk_output directory contains the pretrained model (an npz file) for sidewalk detection around Monash University. It was trained on 424x240 images, so it is better to modify the camera image size to match the training dataset.

The image_processing package in catkin_ws contains the code to convert a 2D semantic segmentation into 3D sidewalk and non-sidewalk PointCloud2 messages. The required inputs are:

  • camera/color/camera_info (for the intrinsic parameters; subscribed only once at the beginning).
  • camera/aligned_depth_to_color/image_raw (output from the Realsense with align_depth = true in the launch file).
  • fcn_object_segmentation/output (output from the JSK semantic segmentation).

The outputs are:

  • semantic_sidewalk (PointCloud2 of the sidewalk).
  • semantic_obstacle (PointCloud2 of everything that is not sidewalk, i.e. the background).

To use:

  • Run rosrun image_processing depth_to_pc2.
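For reference, the conversion such a node performs is the standard pinhole back-projection: every sidewalk-labelled pixel is lifted into 3D using the aligned depth value and the colour camera intrinsics. The sketch below uses illustrative names and omits the message plumbing and synchronization; it is not the actual depth_to_pc2 source.

    // Sketch of pinhole back-projection from depth + labels to 3D points.
    #include <sensor_msgs/CameraInfo.h>
    #include <cstdint>
    #include <vector>

    struct Point3D { float x, y, z; };

    // depth:  16UC1 depth aligned to colour, in millimetres (Realsense default)
    // labels: per-pixel class ids from fcn_object_segmentation/output
    std::vector<Point3D> backProject(const std::vector<uint16_t>& depth,
                                     const std::vector<uint8_t>& labels,
                                     int width, int height,
                                     const sensor_msgs::CameraInfo& info,
                                     uint8_t sidewalk_label)
    {
      // Intrinsics from the 3x3 K matrix: [fx 0 cx; 0 fy cy; 0 0 1]
      const double fx = info.K[0], fy = info.K[4];
      const double cx = info.K[2], cy = info.K[5];

      std::vector<Point3D> cloud;
      for (int v = 0; v < height; ++v) {
        for (int u = 0; u < width; ++u) {
          const int i = v * width + u;
          if (labels[i] != sidewalk_label || depth[i] == 0)
            continue;                        // background or invalid depth
          const float z = depth[i] * 0.001f; // mm -> m
          cloud.push_back({static_cast<float>((u - cx) * z / fx),
                           static_cast<float>((v - cy) * z / fy),
                           z});
        }
      }
      return cloud;
    }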

Localization

You will need a TF tree that connects the /UTM frame to the /map frame. This can be done with the robot_localization package: https://github.com/cra-ros-pkg/robot_localization.
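Once robot_localization's navsat_transform_node is broadcasting the transform (its broadcast_utm_transform parameter must be enabled), a quick way to verify it is a tf2 lookup. The sketch below assumes the robot_localization default frame names "utm" and "map"; adjust them if your setup differs.

    // Quick check that the UTM -> map transform is being published.
    #include <ros/ros.h>
    #include <tf2_ros/transform_listener.h>
    #include <geometry_msgs/TransformStamped.h>

    int main(int argc, char** argv)
    {
      ros::init(argc, argv, "utm_map_checker");
      ros::NodeHandle nh;

      tf2_ros::Buffer buffer;
      tf2_ros::TransformListener listener(buffer);

      try {
        geometry_msgs::TransformStamped t =
            buffer.lookupTransform("map", "utm", ros::Time(0), ros::Duration(5.0));
        ROS_INFO("utm -> map: x=%.2f y=%.2f",
                 t.transform.translation.x, t.transform.translation.y);
      } catch (const tf2::TransformException& ex) {
        ROS_WARN("Transform not available: %s", ex.what());
      }
      return 0;
    }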