An implementation of object grasping using the Franka Panda robot and the Contact-GraspNet grasp generation algorithm.
- Install Contact-GraspNet, the segmentation network (UnseenObjectClustering), Franka ROS, and RealSense ROS using the links in Prerequisites.
- Create a Python package inside franka_ros using Panda_Robot_side (see the example command after this list).
- Copy the contents of grasp_generation into the ws_graspnet/src/contact_graspnet_ros/contact_graspnet/contact_graspnet folder.
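For example, the package can be created with catkin (the package name and dependencies below are assumptions; adapt them to the contents of Panda_Robot_side):

catkin_create_pkg panda_robot_side rospy geometry_msgs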
Steps
- Start Camera
Starting the camera and making RGB images and depth available on ROS topics:
roslaunch realsense2_camera rs_aligned_depth.launch tf_prefix:=_camera enable_pointcloud:=true
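To quickly check that the images and depth are actually streaming, a minimal subscriber sketch like the one below can be used; the topic names are the realsense2_camera defaults and are assumptions that may differ depending on your launch arguments.

```python
# Minimal sketch to verify the camera topics are streaming.
# Topic names are the realsense2_camera defaults (assumptions); adjust to your setup.
import rospy
from sensor_msgs.msg import Image

def on_color(msg):
    rospy.loginfo_throttle(2.0, "color: %dx%d, encoding=%s" % (msg.width, msg.height, msg.encoding))

def on_depth(msg):
    rospy.loginfo_throttle(2.0, "depth: %dx%d, encoding=%s" % (msg.width, msg.height, msg.encoding))

if __name__ == "__main__":
    rospy.init_node("camera_topic_check")
    rospy.Subscriber("/camera/color/image_raw", Image, on_color)
    rospy.Subscriber("/camera/aligned_depth_to_color/image_raw", Image, on_depth)
    rospy.spin()
```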
- Start Segmentation
Using the camera ROS topics, this step generates segmentation masks (published as ROS topics) for grasp generation.
Navigate to the UnseenObjectClustering folder:
cd /home/(your_workspace)../UnseenObjectClustering/
Starting RViz to view the segmentation output:
rosrun rviz rviz -d ./ros/segmentation.rviz
Start segmentation (the parameter '0' selects the GPU):
./experiments/scripts/ros_seg_rgbd_add_test_segmentation_realsense.sh 0
This will start the segmentation, and the output can be viewed in RViz. The associated topics will also publish the segmented images.
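For inspecting the published masks from another node, a minimal sketch is shown below; the topic name is a placeholder assumption, so check `rostopic list` for the names actually published by the segmentation node.

```python
# Minimal sketch for inspecting the segmentation output.
# The topic name below is a placeholder assumption; use `rostopic list`
# to find the actual topic published by the segmentation node.
import rospy
import numpy as np
from sensor_msgs.msg import Image

def on_label(msg):
    # Interpret the label image as a raw byte buffer and count distinct values.
    labels = np.frombuffer(msg.data, dtype=np.uint8)
    rospy.loginfo_throttle(2.0, "segmentation mask %dx%d, %d distinct label values"
                           % (msg.width, msg.height, len(np.unique(labels))))

if __name__ == "__main__":
    rospy.init_node("segmentation_check")
    rospy.Subscriber("/seg_label", Image, on_label)  # placeholder topic name
    rospy.spin()
```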
- Start Grasp generation
Generate grasp poses (this will start a publisher) using the camera topics and segmentation topics as input. The script is located in the ws_graspnet/src/contact_graspnet_ros/contact_graspnet/contact_graspnet folder:
python generate_grasp.py
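The exact topic and message type are defined in generate_grasp.py; the sketch below only illustrates one plausible way a consumer could listen for the published grasps, assuming a geometry_msgs/PoseArray on a placeholder topic name.

```python
# Minimal sketch of a consumer for the published grasps.
# Both the topic name and the message type are assumptions (generate_grasp.py
# defines the actual interface); check `rostopic list` / `rostopic info`.
import rospy
from geometry_msgs.msg import PoseArray

def on_grasps(msg):
    rospy.loginfo("received %d candidate grasps in frame %s"
                  % (len(msg.poses), msg.header.frame_id))

if __name__ == "__main__":
    rospy.init_node("grasp_listener")
    rospy.Subscriber("/generated_grasps", PoseArray, on_grasps)  # placeholder name
    rospy.spin()
```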
- Start the Franka Panda robot
- Start the MoveIt package
- Manipulate the robot for grasping:
python panda_move_testmoveit.py
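panda_move_testmoveit.py drives the actual grasp execution. As a rough illustration of the MoveIt side, the sketch below moves the arm to a single hard-coded pose with moveit_commander; the group name "panda_arm" and the pose values are assumptions for illustration only, and in practice the target would come from the grasp generation topic.

```python
# Minimal sketch of moving the Panda arm to a target pose with MoveIt.
# The group name "panda_arm" and the pose values are assumptions for illustration.
import sys
import rospy
import moveit_commander
from geometry_msgs.msg import Pose

if __name__ == "__main__":
    moveit_commander.roscpp_initialize(sys.argv)
    rospy.init_node("panda_grasp_sketch")

    arm = moveit_commander.MoveGroupCommander("panda_arm")

    # Example target pose (would normally come from the grasp generation topic).
    target = Pose()
    target.position.x = 0.4
    target.position.y = 0.0
    target.position.z = 0.3
    target.orientation.w = 1.0

    arm.set_pose_target(target)
    success = arm.go(wait=True)   # plan and execute
    arm.stop()                    # ensure no residual movement
    arm.clear_pose_targets()
    rospy.loginfo("motion %s" % ("succeeded" if success else "failed"))
```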