
Cross-Modal-Visual-Tactile-Data-Generation

This is the source code for our paper, accepted for publication in the IEEE Robotics and Automation Letters (RA-L) with presentation at IROS 2021:

Visual-Tactile Cross-Modal Data Generation using Residue-Fusion GAN with Feature-Matching and Perceptual Losses

(Teaser image)

Setup

We run the program on a Linux desktop using Python.

Environment requirements (an example install command follows the list):

  • tensorflow 2.1.0
  • tensorflow-addons 0.12.0
  • tensorflow-io 0.17.0
  • librosa 0.8.0
  • scipy 1.4.1
  • opencv 4.5.1
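
The packages above can be installed with pip; the one-liner below is a minimal sketch that pins the listed versions. Note that the PyPI package name for OpenCV (opencv-python) is an assumption, and pinning it to 4.5.1 may require the full build string (e.g. 4.5.1.48), so it is left unpinned here.

pip install tensorflow==2.1.0 tensorflow-addons==0.12.0 tensorflow-io==0.17.0 librosa==0.8.0 scipy==1.4.1 opencv-python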

Usage

Train the model (users can set the number of training epochs; a concrete example follows this list):

  • Tactile-to-Visual (T2V)
python CM_T2V.py --train --epoch <number>
  • Visual-to-Tactile (V2T)
python CM_V2T.py --train --epoch <number>
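
For example, training the T2V model for 200 epochs (the epoch count here is purely illustrative, not a recommended setting) would look like:

python CM_T2V.py --train --epoch 200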

Test the model:

  • Tactile-to-Visual (T2V)
python CM_T2V.py --test
  • Visual-to-Tactile (V2T)
python CM_V2T.py --test

Visualize the generated frictional signals (an offline plotting sketch follows these commands):

  • T2V and V2T
python CM_T2V.py --visualize
python CM_V2T.py --visualize
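
For reference, the sketch below shows one way to inspect a generated frictional (acceleration) trace offline with librosa and matplotlib. The file name, WAV storage format, and sample-rate handling are assumptions for illustration only; the scripts' --visualize option above is the authoritative way to view the results.

# Hypothetical offline inspection of a generated frictional signal.
# The file name and WAV format are assumptions; use --visualize for the project's own plots.
import librosa
import librosa.display
import matplotlib.pyplot as plt
import numpy as np

signal, sr = librosa.load("generated_tactile_sample.wav", sr=None)  # hypothetical file

# Log-magnitude spectrogram via short-time Fourier transform
stft = np.abs(librosa.stft(signal, n_fft=512, hop_length=128))
spec_db = librosa.amplitude_to_db(stft, ref=np.max)

plt.figure(figsize=(8, 6))
plt.subplot(2, 1, 1)
librosa.display.waveplot(signal, sr=sr)  # time-domain trace
plt.title("Generated frictional signal (time domain)")
plt.subplot(2, 1, 2)
librosa.display.specshow(spec_db, sr=sr, hop_length=128, x_axis="time", y_axis="hz")
plt.title("Log-magnitude spectrogram")
plt.tight_layout()
plt.show()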

Visualize the training process:

cd logs
tensorboard --logdir=./
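
By default, TensorBoard serves its dashboard at http://localhost:6006; a different port can be selected with the --port flag, for example:

tensorboard --logdir=./ --port 6007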

Evaluation:

  • Visual Classification
python visualclass.py
  • Tactile Classification
python tactileclass.py

Data: Training and testing data can be downloaded here. After extracting the compressed file, put all the folders from the downloaded archive into the project's './dataset' directory (the same directory that contains the teaser image).
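
For orientation, the project layout after extraction might look like the sketch below; the data folder names are placeholders (the actual names come from the downloaded archive), and only the './dataset' location is taken from the instructions above.

Cross-Modal-Visual-Tactile-Data-Generation/
├── CM_T2V.py
├── CM_V2T.py
├── visualclass.py
├── tactileclass.py
├── logs/
└── dataset/
    ├── <downloaded folder 1>/
    ├── <downloaded folder 2>/
    └── ...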

Acknowledgement

The original dataset is from the LMT-108-Surface-Material database (LMT-Haptic-Database). Thanks for the great work!