Paper | Demonstration video | Presentation video | Synthetic dataset generated | Real dataset collected | Experiments repository
We introduce a synthetic dataset generation pipeline designed for vision-based prosthetic grasping. The pipeline supports multiple grasps per object by overlaying a transparent parallelepiped onto each object part to be grasped. While recording each video, the camera moves along a straight line towards the object part. The scene, the initial camera position and the object pose are randomized in each video.
We used 15 objects from the YCB dataset: 7 of them have a single grasp and 8 have multiple grasps, resulting in 31 grasp type - object part pairs.
Our work was accepted at IROS 2022.
- The project uses Unity 2020.3.11f1. Find the version here and click on the `Unity Hub` button to download it.
- It has been tested on Windows 10/11.
- All the necessary packages (e.g., Perception) are already included in the repository, therefore no installation step is required.
- Install Git for Windows (note that Git for Windows is a separate project and is not Git itself).
- Install Git LFS.
- Open a Command Prompt and run `git lfs install` to initialize it. Then, clone the repository: `git clone https://github.com/hsp-iit/prosthetic-grasping-simulation`
- Git LFS has some problems with the file `Assets/Scene/SampleScene/LightingData.asset`. Therefore, download the original `LightingData.asset` file from here and replace it.
- Go to the Unity Hub, click on `Open` and locate the downloaded repository.
- Once the project is open, ensure that the correct scene is selected: in the `Project` tab, open `Assets\Scenes` and double-click on `Data_collection.unity` to open the scene.
- From the top bar menu, open `Edit -> Project Settings`.
- [OPTIONAL] The pipeline generates the same number of videos for each grasp type - object part pair (recall that there are currently 31 pairs). By default, 50 videos are generated for each pair, resulting in 1550 videos. From the `Hierarchy` tab (left-hand side), click on `Simulation Scenario` and its properties will appear in the `Inspector` tab (right-hand side). Make sure that the value of `Fixed Length Scenario -> Scenario Properties -> Constants -> Iteration Count` is set to 1550 and the value of `Fixed Length Scenario -> Randomizers -> WristCameraMovement -> Num Iterations Per Grasp` is set to 50. If you want to generate a different number of videos, change these values accordingly (see the sketch below): for instance, to generate 10 videos for each pair, set `Iteration Count` to 31*10=310 and `Num Iterations Per Grasp` to 10. If the numbers are not consistent, the execution stops.
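The consistency rule is simply `Iteration Count = Num Iterations Per Grasp * number of pairs`. The snippet below is a minimal sketch (plain Python, independent of the Unity project; the variable names are illustrative) to compute consistent values before typing them into the `Inspector`:

```python
# Consistency rule described above: Iteration Count must equal
# Num Iterations Per Grasp multiplied by the number of grasp type - object part pairs.
NUM_PAIRS = 31                    # grasp type - object part pairs currently in the project
num_iterations_per_grasp = 10     # videos to generate for each pair
iteration_count = NUM_PAIRS * num_iterations_per_grasp

print(f"Iteration Count: {iteration_count}")                    # 310
print(f"Num Iterations Per Grasp: {num_iterations_per_grasp}")  # 10
```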
- [OPTIONAL] To set the dataset output folder, go to `Edit -> Project Settings -> Perception` and click on the `Change Folder` button to set a new `Base Path`.
- 🚀 Click on the play button at the top to start collecting the synthetic dataset.
- [WARNING] If you want to change settings, e.g., enable bounding box/semantic segmentation labeling or import your own objects, a few settings need to be adjusted. These are not explained here for the sake of brevity; feel free to contact me (federico.vasile@iit.it) or open an issue and I will provide you with all the instructions.
- When the simulation is over, go to the `Hierarchy` tab and select `WristCamera`. In the `Inspector` tab, search for `Latest Generated Dataset` and click on `Show folder` to locate the dataset folder.
- Once you are in the dataset folder mentioned above, you can find the labels (along with other metadata) as JSON files (`captures_***.json`) inside the `Dataset023982da-0257-4541-9886-d22172b6c94c` folder (this is an example name; you will have a different hash code following the `Dataset` prefix). All the video frames (`rgb_***.png`) are located under the `RGB_another_hash_code_` folder; a minimal sketch for inspecting the label files is shown below.
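If you want to quickly inspect the generated labels, the following is a minimal sketch. It assumes the standard Unity Perception output format, where each `captures_***.json` file contains a `captures` list whose entries reference the corresponding rgb frame through a `filename` field; adjust the keys if your Perception version uses a different schema:

```python
import glob
import json
import os

# Inspect the labels written by the Perception package.
# NOTE: the exact JSON schema depends on the Perception package version; the
# "captures" list and the per-capture "filename" field used here are assumptions.
dataset_dir = "Dataset023982da-0257-4541-9886-d22172b6c94c"  # replace with your own Dataset_hash_code_ folder

for path in sorted(glob.glob(os.path.join(dataset_dir, "captures_*.json"))):
    with open(path) as f:
        data = json.load(f)
    for capture in data.get("captures", []):
        # Each capture references one rgb frame inside the RGB_another_hash_code_ folder
        print(capture.get("filename"), "->", len(capture.get("annotations", [])), "annotations")
```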
- We provide a script to convert the frames and labels into the structure used by our experiments pipeline. Each video will be organized according to the following path: `DATASET_BASE_FOLDER/CATEGORY_NAME/OBJECT_NAME/PRESHAPE_NAME/Wrist_d435/rgb*/*.png`. For example: `ycb_synthetic_dataset/dispenser/006_mustard_bottle/power_no3/Wrist_d435/rgb*/*.png`
- To run the script, go into `python_scripts/Data_collection` and copy `convert_dataset.py` into the folder of your generated synthetic dataset (i.e., the folder containing the `Dataset_hash_code_` and `RGB_another_hash_code_` folders). Go into the synthetic dataset folder and run the script: `python3 convert_dataset.py`
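After the conversion, a short sketch like the one below (not shipped with the repository; the folder name is just an example) can be used to check that the converted dataset follows the `CATEGORY_NAME/OBJECT_NAME/PRESHAPE_NAME` layout and to count the frames of each video:

```python
import glob
import os

# Walk the converted dataset and count the frames of each video.
# Expected layout: DATASET_BASE_FOLDER/CATEGORY_NAME/OBJECT_NAME/PRESHAPE_NAME/Wrist_d435/rgb*/*.png
dataset_base_folder = "ycb_synthetic_dataset"  # replace with your own converted dataset folder

pattern = os.path.join(dataset_base_folder, "*", "*", "*", "Wrist_d435", "rgb*")
for video_dir in sorted(glob.glob(pattern)):
    rel = os.path.relpath(video_dir, dataset_base_folder)
    category, obj, preshape = rel.split(os.sep)[:3]
    n_frames = len(glob.glob(os.path.join(video_dir, "*.png")))
    print(f"{category}/{obj}/{preshape} ({os.path.basename(video_dir)}): {n_frames} frames")
```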
@inproceedings{vasile2022,
author = {F. Vasile and E. Maiettini and G. Pasquale and A. Florio and N. Boccardo and L. Natale},
title = {Grasp Pre-shape Selection by Synthetic Training: Eye-in-hand Shared Control on the Hannes Prosthesis},
booktitle = {2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
year = {2022},
month = {Oct},
}
This repository is maintained by:
@FedericoVasile1
- For further details about our synthetic data generation pipeline, please refer to our paper (specifically SEC. IV) and feel free to contact me: federico.vasile@iit.it
- A demonstration video of our model trained on the synthetic data and tested on the Hannes prosthesis is available here
- A presentation video summarizing our work is available here
- The synthetic dataset used in our experiments is available for download here
- Along with the synthetic data generation pipeline, we collected a real dataset, available for download here
- To reproduce our experiments, you need both the real and the synthetic dataset. To use our experiments pipeline, ensure that both datasets are in the correct format (we provide a script in the experiments pipeline to automatically download and correctly arrange both datasets).