
VTFSA

This repo contains the source code and supplementary materials for the paper "Self-Attention Based Visual-Tactile Fusion Learning for Predicting Grasp Outcomes".
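To illustrate the general idea of self-attention based fusion, below is a minimal NumPy sketch that treats the visual and tactile feature vectors as a two-token sequence and applies scaled dot-product self-attention over them. This is only an illustrative example with identity Q/K/V projections, not the paper's actual architecture; see the paper and the supplementary materials for the real model.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention_fusion(visual_feat, tactile_feat):
    """Fuse two modality feature vectors with single-head scaled
    dot-product self-attention over the two modality tokens.
    Illustrative sketch only -- not the paper's exact model."""
    # Stack modalities as a 2-token sequence: shape (2, d)
    tokens = np.stack([visual_feat, tactile_feat])
    d = tokens.shape[-1]
    # Identity projections for Q, K, V, kept simple for illustration
    scores = tokens @ tokens.T / np.sqrt(d)   # (2, 2) attention scores
    attn = softmax(scores, axis=-1)           # row-wise attention weights
    fused = attn @ tokens                     # (2, d) attended features
    return fused.mean(axis=0)                 # pooled fused embedding, (d,)

rng = np.random.default_rng(0)
v = rng.normal(size=64)   # placeholder visual feature
t = rng.normal(size=64)   # placeholder tactile feature
print(self_attention_fusion(v, t).shape)  # -> (64,)
```

In the paper's setting, the features would come from learned visual and tactile encoders, and the attention projections would be trained jointly with a grasp-outcome classifier.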

Datasets

The experiments are performed on two multimodal grasping datasets: https://arxiv.org/abs/1710.05512 and https://ieeexplore.ieee.org/abstract/document/8665307.

Train & Test

Run the main_*.py files to train or test different models.

Supplementary Materials

The detailed parameter settings of these models are provided in "Supplementary materials.PDF".