prosthetic-grasping-simulation

Synthetic dataset generation for vision-based prosthetic grasping


Grasp Pre-shape Selection by Synthetic Training:
Eye-in-hand Shared Control on the Hannes Prosthesis

Unity 2020.3.20f1 Perception 0.11.2-preview.2

Paper     Demonstration video     Presentation video
Synthetic dataset generated     Real dataset collected     Experiments repository    

We introduce a synthetic dataset generation pipeline designed for vision-based prosthetic grasping. The method supports multiple grasps per object by overlaying a transparent parallelepiped onto each object part to be grasped. The camera follows a straight line towards the object part while recording the video, and the scene, initial camera position and object pose are randomized in each video (a minimal randomizer sketch illustrating this camera movement is shown below).
We use 15 objects from the YCB dataset: 7 of them have a single grasp and 8 have multiple grasps, resulting in 31 grasp type - object part pairs.
Our work was accepted at IROS 2022.
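
As a rough illustration of how this per-video randomization could be scripted with the Perception package, below is a minimal C# randomizer sketch. It is not the project's actual WristCameraMovement randomizer: the class name, fields, sampling ranges and video length are assumptions made for illustration only.

```csharp
using System;
using UnityEngine;
using UnityEngine.Perception.Randomization.Parameters;
using UnityEngine.Perception.Randomization.Randomizers;
using UnityEngine.Perception.Randomization.Samplers;

// Hypothetical sketch (NOT the project's WristCameraMovement implementation):
// each iteration randomizes the initial camera pose, then moves the camera
// along a straight line towards the target object part, one step per frame.
[Serializable]
[AddRandomizerMenu("Custom/Straight Line Approach (sketch)")]
public class StraightLineApproachRandomizer : Randomizer
{
    public Transform wristCamera;        // wrist-mounted camera (assumed scene reference)
    public Transform targetObjectPart;   // transparent parallelepiped marking the part to grasp
    public int framesPerVideo = 100;     // illustrative number of frames per recorded approach

    // Initial camera distance from the object part, sampled uniformly each iteration.
    public FloatParameter startDistance = new FloatParameter { value = new UniformSampler(0.4f, 0.6f) };

    Vector3 m_StartPosition;
    int m_Frame;

    protected override void OnIterationStart()
    {
        // Random starting point on a sphere around the object part.
        m_StartPosition = targetObjectPart.position +
                          UnityEngine.Random.onUnitSphere * startDistance.Sample();
        wristCamera.position = m_StartPosition;
        wristCamera.LookAt(targetObjectPart);
        m_Frame = 0;
    }

    protected override void OnUpdate()
    {
        // Straight-line interpolation from the starting pose to the object part.
        var t = Mathf.Clamp01((float)m_Frame / framesPerVideo);
        wristCamera.position = Vector3.Lerp(m_StartPosition, targetObjectPart.position, t);
        wristCamera.LookAt(targetObjectPart);
        m_Frame++;
    }
}
```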

Getting started

  • The project uses Unity 2020.3.20f1 (see the badge above). Find the version here and click on the Unity Hub button to download it.
  • It has been tested on Windows 10/11.
  • All the necessary packages (e.g. Perception) are already included in the repository, so no additional package installation is required.

Installation

  • Install Git for Windows (note that Git for Windows is a separate project from Git itself).
  • Install Git LFS.
  • Open a Command Prompt and run git lfs install to initialize it.
    Then, clone the repository: git clone https://github.com/hsp-iit/prosthetic-grasping-simulation
  • Git LFS has some problems with the file Assets/Scene/SampleScene/LightingData.asset. Therefore, download the original LightingData.asset file from here and replace it.
  • Open Unity Hub, click on Open and locate the cloned repository.

Synthetic dataset generation in Unity

  • Once the project is open, ensure that the correct scene is loaded: in the Project tab, open Assets/Scenes and double-click on Data_collection.unity to open the scene.

  • From the top bar menu, open Edit -> Project Settings

    • In Project Settings, search for Lit Shader Mode and set it to Both.

    • In Project Settings, search for Motion Blur and disable it.

  • [OPTIONAL] The pipeline generates the same number of videos for each grasp type - object part pair (recall that there are currently 31 pairs). By default, 50 videos are generated for each pair, resulting in 1550 videos. From the Hierarchy tab (left-hand side) click on Simulation Scenario and its properties will appear in the Inspector tab (right-hand side). Make sure that the value of Fixed Length Scenario -> Scenario Properties -> Constants -> Iteration Count is set to 1550 and the value of Fixed Length Scenario -> Randomizers -> WristCameraMovement -> Num Iterations Per Grasp is set to 50. If you want to generate a different number of videos, change these values accordingly: for instance, to generate 10 videos for each pair, set Iteration Count to 31*10=310 and Num Iterations Per Grasp to 10. If the two values are not consistent, the execution stops (a sketch illustrating this rule is shown at the end of this section).

  • [OPTIONAL] To set the dataset output folder, go to Edit -> Project Settings -> Perception and click on the Change Folder button to set a new Base Path.

  • 🚀 Click on the play button at the top to start generating the synthetic dataset.

  • [WARNING]: if you want to change the setup, e.g., enable bounding box/semantic segmentation labeling or import your own objects, a few settings need to be adjusted. These are not explained here for the sake of brevity; feel free to contact me (federico.vasile@iit.it) or open an issue and I will provide all the instructions.

  • When the simulation is over, go to the Hierarchy tab and select WristCamera. In the Inspector tab, search for Latest Generated Dataset and click on Show folder to locate the dataset folder.
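
As anticipated above, the scenario configuration must satisfy: Iteration Count = (number of grasp type - object part pairs) x Num Iterations Per Grasp. The following C# snippet is a hypothetical illustration of that check, not code taken from this project; the class and field names are assumptions.

```csharp
using UnityEngine;

// Illustrative consistency check for the scenario configuration described above
// (hypothetical: the project performs its own check when the simulation runs).
public class IterationCountCheck : MonoBehaviour
{
    public int numGraspObjectPairs = 31;    // current number of grasp type - object part pairs
    public int numIterationsPerGrasp = 50;  // videos generated per pair
    public int iterationCount = 1550;       // Fixed Length Scenario -> Constants -> Iteration Count

    void Awake()
    {
        if (iterationCount != numGraspObjectPairs * numIterationsPerGrasp)
        {
            Debug.LogError($"Iteration Count ({iterationCount}) must equal " +
                           $"{numGraspObjectPairs} x {numIterationsPerGrasp}: stopping playback.");
#if UNITY_EDITOR
            UnityEditor.EditorApplication.isPlaying = false;
#endif
        }
    }
}
```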

Converting the generated videos into our own format

  • Once you are in the dataset folder mentioned above, you can find the labels (along with other metadata) as JSON files (captures_***.json) in the Dataset023982da-0257-4541-9886-d22172b6c94c folder (this is an example name; you will get a different hash code after the Dataset prefix).
    All the video frames (rgb_***.png) are located under the RGB_another_hash_code_ folder.
  • We provide a script to convert the frames and labels into the structure used by our experiments pipeline. Each video is organized according to the following path: DATASET_BASE_FOLDER/CATEGORY_NAME/OBJECT_NAME/PRESHAPE_NAME/Wrist_d435/rgb*/*.png (a sketch of this layout is shown after this list).
    For example: ycb_synthetic_dataset/dispenser/006_mustard_bottle/power_no3/Wrist_d435/rgb*/*.png
  • To run the script, go into python_scripts/Data_collection and copy convert_dataset.py into the folder of your generated synthetic dataset (i.e. the folder containing the Dataset_hash_code_ and RGB_another_hash_code_ folders). Then, from the synthetic dataset folder, run the script: python3 convert_dataset.py
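
The actual conversion is performed by the provided Python script (convert_dataset.py). Purely as an illustration of the target layout described above, here is a hypothetical C# snippet that builds the destination folder for one example video; the folder name rgb_1 and the class name are assumptions, not taken from the project.

```csharp
using System;
using System.IO;

// Illustrative only: builds the destination folder for a single converted video, following
// DATASET_BASE_FOLDER/CATEGORY_NAME/OBJECT_NAME/PRESHAPE_NAME/Wrist_d435/rgb*/
// (the real conversion is done by python_scripts/Data_collection/convert_dataset.py).
public static class TargetLayoutExample
{
    public static void Main()
    {
        var datasetBase = "ycb_synthetic_dataset";   // DATASET_BASE_FOLDER
        var category    = "dispenser";               // CATEGORY_NAME
        var objectName  = "006_mustard_bottle";      // OBJECT_NAME
        var preshape    = "power_no3";               // PRESHAPE_NAME

        // "rgb_1" is a hypothetical name matching the rgb* pattern in the layout above.
        var videoFolder = Path.Combine(datasetBase, category, objectName, preshape, "Wrist_d435", "rgb_1");
        Directory.CreateDirectory(videoFolder);

        // The frames (rgb_***.png) produced by Perception would then be copied into videoFolder.
        Console.WriteLine(videoFolder);
    }
}
```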

Citation

@inproceedings{vasile2022,
    author    = {F. Vasile and E. Maiettini and G. Pasquale and A. Florio and N. Boccardo and L. Natale},
    title     = {Grasp Pre-shape Selection by Synthetic Training: Eye-in-hand Shared Control on the Hannes Prosthesis},
    booktitle = {2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
    year      = {2022},
    month     = {Oct},
}

Maintainer

This repository is maintained by:

@FedericoVasile1

Related links:

  • For further details about our synthetic data generation pipeline, please refer to our paper (specifically Sec. IV) and feel free to contact me: federico.vasile@iit.it
  • A demonstration video of our model trained on the synthetic data and tested on the Hannes prosthesis is available here
  • A presentation video summarizing our work is available here
  • The synthetic dataset used in our experiments is available for download here
  • Along with the synthetic data generation pipeline, we collected a real dataset, available for download here
  • To reproduce our experiments, you need both the real and the synthetic dataset. To use our experiments pipeline, ensure that both datasets are in the correct format (we provide a script in the experiments pipeline to automatically download and correctly arrange both datasets).