
3DSGrasp


IEEE ICRA 2023 - 3DSGrasp: 3D Shape-Completion for Robotic Grasp [ YouTube Video ]

We present a grasping strategy, named 3DSGrasp, that predicts the missing geometry from partial point-cloud data (PCD) to produce reliable grasp poses. Our PCD completion network is a Transformer-based encoder-decoder with an Offset-Attention layer. The network is inherently invariant to object pose and point permutation, and generates completed PCDs that are geometrically consistent. Experiments on a wide range of partial PCDs show that 3DSGrasp outperforms the best state-of-the-art method on PCD completion tasks and substantially improves the grasping success rate in real-world scenarios.
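
For readers unfamiliar with Offset-Attention (introduced in PCT, Guo et al., 2021), the sketch below shows a minimal PyTorch layer of this kind; the projection sizes, normalization details, and residual wiring are illustrative assumptions, not the exact 3DSGrasp implementation.

# Minimal Offset-Attention sketch (PCT-style); sizes and normalization are assumptions.
import torch
import torch.nn as nn

class OffsetAttention(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.q = nn.Linear(channels, channels // 4, bias=False)
        self.k = nn.Linear(channels, channels // 4, bias=False)
        self.v = nn.Linear(channels, channels)
        self.lbr = nn.Sequential(              # Linear-BatchNorm-ReLU block
            nn.Linear(channels, channels),
            nn.BatchNorm1d(channels),
            nn.ReLU(),
        )

    def forward(self, x):                      # x: (B, N, C) point features
        q, k, v = self.q(x), self.k(x), self.v(x)
        energy = torch.bmm(q, k.transpose(1, 2))              # (B, N, N)
        attn = torch.softmax(energy, dim=-1)
        attn = attn / (1e-9 + attn.sum(dim=1, keepdim=True))  # L1 re-normalization
        context = torch.bmm(attn, v)                          # (B, N, C)
        offset = x - context                   # the "offset" that names the layer
        b, n, c = offset.shape
        out = self.lbr(offset.reshape(b * n, c)).reshape(b, n, c)
        return x + out                         # residual connection

Because the attention weights depend only on pairwise feature similarities, the layer's output is unchanged under a permutation of the input points, which is the property the paragraph above refers to.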

💻 Quick Start

⤵️ After installing, follow the appropriate instructions if you want to:

  • run the full pipeline (from camera depth input to the Kinova grasping the object) ➡️ Full Pipeline
  • run only the completion network to generate a shape completion of a partial.pc ➡️ Completion Network
  • run only GPD to generate grasp candidates for the point-cloud data of either partial.pc or complete.pc ➡️ Test GPD
  • 🚆 use our model? ➡️ Completion Network
  • 🚦 use the same train-test split of the YCB dataset? ➡️ Completion Network

🔑 Installations

To begin, clone this repository locally:

git clone git@github.com:NunoDuarte/3DSGrasp.git
export 3DSG_ROOT=$(pwd)/3DSGrasp

This repo was tested on Ubuntu 20.04 with ROS Noetic.

🔑 Install requirements for the Completion Network:

cd $3DSG_ROOT
conda create -n 3dsg_venv python=3.8  # or use virtualenv
conda activate 3dsg_venv
sh install.sh

🔑 Install requirements for the full Pipeline:

sudo apt install ros-noetic-moveit
  • Check the official documentation for GPD (⚠️ the gpd repo was tested on Ubuntu 16.04; if you have trouble installing it on Ubuntu 20.04, open an issue and we'll help)

🔑 Install requirements to test GPD (to see the grasps generated for your partial.pc or complete.pc):

  • Check the official documentation for GPD (⚠️ the gpd repo was tested on Ubuntu 16.04; if you have trouble installing it on Ubuntu 20.04, open an issue and we'll help)

📄 Step-by-step guide to run the 3DSGrasp Pipeline

Open terminals:

  1. ROS KINOVA
source catkin_ws/devel/setup.bash
roslaunch kortex_driver kortex_driver.launch

(optional) import the RViz environment and/or the table for collision detection

open the config file 3DSG_ROOT/ROS/rviz/grasp_kinova.rviz
go to Scene Objects -> Import -> 3DSG_ROOT/ROS/rviz/my_table -> Publish
  2. ROS KINOVA VISION
source catkin_ws/devel/setup.bash
roslaunch kinova_vision kinova_vision_rgbd.launch device:=$IP_KINOVA  # $IP_KINOVA is the robot's IP address
  3. Configure the Kinova and GPD files. Set an initial pose for the Kinova manually, or (optional) save it as a .npy file and load it in reach_approach_grasp_pose.py (a sketch for recording such a file is shown after this list)
cd 3DSG_ROOT/ROS/src/

(optional) open reach_approach_grasp_pose.py and set the location of the Kinova's initial_state.npy

        # Load initial state of robot (joint angles)
        initial_state = np.load('location_of_initial_state.npy')

set the locations of final_pose.npy and final_approach.npy (these are the best grasp and approach poses from GPD)

        print('Load grasp pose')
        final_pose = np.load('location_of_final_pose.npy')
        final_approach = np.load('location_of_final_approach.npy')
  4. RUN PIPELINE
source catkin_ws/devel/setup.bash
cd 3DSG_ROOT/
python main_gpd.py

If segmentation fails (the partial point cloud includes artifacts), quit every plot, immediately press Ctrl+C (multiple times), wait for the program to close, and run it again.

  5. RUN ON KINOVA (see the MoveIt sketch after this list)
source catkin_ws/devel/setup.bash
roslaunch kortex_examples reach_approach_grasp_pose.launch
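
For step 3, if you want to record the initial state as a .npy file, a minimal sketch is shown below, assuming the joint angles are published on the standard /joint_states topic (the script and topic name are our assumptions, not part of the repo):

# record_initial_state.py -- hypothetical helper, not part of this repo.
# Saves the Kinova's current joint angles so reach_approach_grasp_pose.py
# can reload them as its initial state.
import numpy as np
import rospy
from sensor_msgs.msg import JointState

rospy.init_node('record_initial_state')
msg = rospy.wait_for_message('/joint_states', JointState)  # assumed topic name
np.save('initial_state.npy', np.array(msg.position))
print('Saved %d joint angles to initial_state.npy' % len(msg.position))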
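
For step 5, the launch file drives the arm to the GPD grasp. The sketch below shows the kind of MoveIt call involved; the .npy layout [x, y, z, qx, qy, qz, qw] and the 'arm' planning-group name are assumptions, and the actual logic lives in reach_approach_grasp_pose.py:

# Sketch only: the kind of MoveIt command used to reach a grasp pose.
import numpy as np
import rospy
import moveit_commander
from geometry_msgs.msg import Pose

rospy.init_node('reach_grasp_sketch')
moveit_commander.roscpp_initialize([])
arm = moveit_commander.MoveGroupCommander('arm')   # planning-group name assumed

grasp = np.load('location_of_final_pose.npy')      # [x, y, z, qx, qy, qz, qw] assumed
pose = Pose()
pose.position.x, pose.position.y, pose.position.z = grasp[:3]
(pose.orientation.x, pose.orientation.y,
 pose.orientation.z, pose.orientation.w) = grasp[3:7]

arm.set_pose_target(pose)                          # plan to the grasp pose
arm.go(wait=True)                                  # execute the motion
arm.stop()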

ℹ️ Information:

  • When grasping, the closure of the gripper is predefined; to change it, open reach_approach_grasp_pose.py and edit the value passed to
approach.example_send_gripper_command(0.3)  # relative closure: 0.0 fully open, 1.0 fully closed
  • To acquire the point cloud and segment it, run:
python main_agile.py

It saves the acquired point cloud as original_pc and the segmented cloud as partial_pc in tmp_data.
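
To sanity-check these outputs, you can view the segmented cloud with Open3D; a quick sketch, assuming the cloud is saved in .pcd format (the exact file name and extension under tmp_data are assumptions):

# Quick visual check of the segmented cloud saved by main_agile.py.
# 'tmp_data/partial_pc.pcd' is an assumed path/extension.
import open3d as o3d

pcd = o3d.io.read_point_cloud('tmp_data/partial_pc.pcd')
print(pcd)                                   # prints the number of points
o3d.visualization.draw_geometries([pcd])     # opens an interactive viewer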

📄 GPD for Point Cloud

To run GPD on a .pcd file:

cd $GPD_ROOT/build
./detect_grasps ../cfg/eigen_params.cfg $LOCATION_OF_FILE.PCD 
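
detect_grasps expects a .pcd file; if your completed cloud is stored as a NumPy array instead, a small conversion sketch follows (the file names here are assumptions; any (N, 3) array of xyz points works):

# Convert an (N, 3) NumPy point array into a .pcd file for detect_grasps.
# 'complete_pc.npy' / 'complete_pc.pcd' are assumed file names.
import numpy as np
import open3d as o3d

points = np.load('complete_pc.npy')                # (N, 3) xyz coordinates
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(points)
o3d.io.write_point_cloud('complete_pc.pcd', pcd)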

🎉 The pre-trained model is here (around 500 MB)!

Completion Network

Documentation for how to run the Completion Network goes here!

The pre-trained model of our Completion Network used in 3DSGrasp

Citation

If you find this code useful in your research, please consider citing our paper. Available on IEEE Xplore and arXiv:

@INPROCEEDINGS{10160350,
  author={Mohammadi, Seyed S. and Duarte, Nuno F. and Dimou, Dimitrios and Wang, Yiming and Taiana, Matteo and Morerio, Pietro and Dehban, Atabak and Moreno, Plinio and Bernardino, Alexandre and Del Bue, Alessio and Santos-Victor, José},
  booktitle={2023 IEEE International Conference on Robotics and Automation (ICRA)}, 
  title={3DSGrasp: 3D Shape-Completion for Robotic Grasp}, 
  year={2023},
  volume={},
  number={},
  pages={3815-3822},
  doi={10.1109/ICRA48891.2023.10160350}
}