
Learning Latent Representations for Speech Generation and Transformation

This repository contains the code to reproduce the core results from the paper "Learning Latent Representations for Speech Generation and Transformation".

To cite this work, please use

@inproceedings{hsu2017learning,
  title={Learning Latent Representations for Speech Generation and Transformation},
  author={Hsu, Wei-Ning and Zhang, Yu and Glass, James},
  booktitle={Interspeech},
  pages={1273--1277},
  year={2017},
}

Dependencies

This project uses Python 2.7.6. Before running the code, you have to install the required dependencies.

All but the last dependency can be installed using pip by running

pip install -r requirements.txt

The last one requires a Kaldi checkout no newer than a specific commit (d1e1e3b). If you don't have Kaldi at such a version, you can install both Kaldi and Kaldi-Python by running

make all
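If you already have a Kaldi checkout and are unsure whether it predates that commit, one way to check is with git's ancestry test (the checkout path below is an assumption):

```shell
# Hypothetical path to an existing Kaldi checkout.
cd "$HOME/kaldi"
# Succeeds when the current HEAD is an ancestor of (i.e. not newer than)
# the commit required by Kaldi-Python.
git merge-base --is-ancestor HEAD d1e1e3b \
  && echo "HEAD predates d1e1e3b: compatible" \
  || echo "HEAD is newer than d1e1e3b"
```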

Usage

The code structure follows Kaldi's convention. Scripts for each dataset can be found in the egs/<dataset> folder. If you have questions, please send an email to wnhsu@csail.mit.edu

TIMIT

To reproduce the experiments for TIMIT, run:

cd egs/timit
./run_spec.sh --TIMIT_RAW_DATA <timit_raw_data_dir>

Orthogonality between latent attribute representations

[Figure: orthogonality between the learned latent attribute representations]

Transforming a male speaker to a female speaker

[Figure: spectrograms before and after the male-to-female transformation]

Transforming a female speaker to a male speaker

[Figure: spectrograms before and after the female-to-male transformation]
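The speaker transformations above rest on latent-space arithmetic: an utterance's latent codes are shifted by the difference between a source and a target attribute representation (here, per-speaker means of latent vectors). A minimal sketch of that arithmetic, with all data and dimensions hypothetical:

```python
import numpy as np

def attribute_repr(latents):
    """Latent attribute representation: the mean of a set of latent vectors."""
    return np.mean(latents, axis=0)

def transform(z, src_attr, tgt_attr):
    """Shift latent codes away from the source attribute toward the target."""
    return z - src_attr + tgt_attr

# Hypothetical latent codes standing in for encoded male/female utterances.
rng = np.random.default_rng(0)
z_male = rng.normal(loc=-1.0, size=(100, 16))
z_female = rng.normal(loc=+1.0, size=(100, 16))

mu_m = attribute_repr(z_male)
mu_f = attribute_repr(z_female)

# Male-to-female transformation of one utterance's latent codes; in the
# full model, the shifted codes would be decoded back to a spectrogram.
z_m2f = transform(z_male[:10], mu_m, mu_f)
```

This is only the latent-space step; the actual quality of the transformed speech depends on the trained VAE encoder/decoder.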