This repository is a WaveNet vocoder implementation with PyTorch.
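WaveNet generates audio sample by sample through stacks of dilated causal convolutions, so its receptive field grows with the sum of the dilation factors. A minimal sketch of that arithmetic (the block/layer counts below are typical WaveNet defaults, not necessarily this repository's configuration):

```python
# Receptive field of a WaveNet-style stack of dilated causal convolutions.
# n_blocks / n_layers / kernel_size are illustrative defaults, not this
# repository's exact configuration.
def receptive_field(n_blocks=2, n_layers=10, kernel_size=2):
    # Dilations double within each block: 1, 2, 4, ..., 2**(n_layers - 1).
    dilations = [2 ** i for i in range(n_layers)] * n_blocks
    return (kernel_size - 1) * sum(dilations) + 1

# 2 blocks of 10 layers with kernel size 2 cover 2047 past samples,
# i.e. about 128 ms of audio at 16 kHz.
print(receptive_field())  # 2047
```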
- Support for Kaldi-like recipes, making results easy to reproduce
- Support for multi-GPU training / decoding
- Support for WORLD features / mel-spectrogram as auxiliary features
- Support for recipes of three public databases:
  - CMU Arctic database: egs/arctic
  - LJ Speech database: egs/ljspeech
  - M-AILABS speech database: egs/m-ailabs-speech
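The auxiliary features condition WaveNet on frame-level acoustics that are upsampled to the sample rate during generation. As a rough illustration of what a mel-spectrogram auxiliary feature looks like, here is a NumPy-only sketch (the FFT/hop/mel settings are illustrative, not the recipes' exact configuration):

```python
import numpy as np

def melspectrogram(wav, sr=16000, n_fft=1024, hop=256, n_mels=80):
    """Minimal log-mel extraction sketch; not this repository's exact code."""
    # Frame the waveform, window each frame, and take the magnitude spectrum.
    window = np.hanning(n_fft)
    n_frames = 1 + (len(wav) - n_fft) // hop
    frames = np.stack([wav[i * hop:i * hop + n_fft] * window
                       for i in range(n_frames)])
    spec = np.abs(np.fft.rfft(frames, n_fft))  # (n_frames, n_fft//2 + 1)

    # Build a triangular mel filterbank between 0 Hz and the Nyquist frequency.
    def hz2mel(f): return 2595.0 * np.log10(1.0 + f / 700.0)
    def mel2hz(m): return 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mel_pts = mel2hz(np.linspace(hz2mel(0.0), hz2mel(sr / 2), n_mels + 2))
    bins = np.floor((n_fft + 1) * mel_pts / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        fbank[m - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)  # rising edge
        fbank[m - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)  # falling edge

    # Apply the filterbank and compress with a log.
    return np.log(spec @ fbank.T + 1e-10)  # (n_frames, n_mels)

# One second of noise at 16 kHz yields 59 frames of 80-dim features.
feats = melspectrogram(np.random.randn(16000))
print(feats.shape)  # (59, 80)
```

In practice these features are extracted per utterance and mean-variance normalized with statistics computed over the training set.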
Requirements:
- python 3.6
- virtualenv
- cuda 8.0
- cudnn 6
- nccl 2.0+ (for multi-GPU use)
We recommend using a GPU with more than 10 GB of memory.
$ git clone https://github.com/kan-bayashi/PytorchWaveNetVocoder.git
$ cd PytorchWaveNetVocoder/tools
$ make
$ cd egs/arctic/sd
$ ./run.sh
See egs/README.md for more details on the recipes.
These are the subjective evaluation results obtained with the arctic recipe.
You can listen to samples generated by our models here.
- arctic_raw_16k.wav: original in arctic database
- arctic_sd_16k_world.wav: sd model with world aux feats + noise shaping with world mcep
- arctic_si-open_16k_world.wav: si-open model with world aux feats + noise shaping with world mcep
- arctic_si-close_16k_world.wav: si-close model with world aux feats + noise shaping with world mcep
- arctic_si-close_16k_melspc.wav: si-close model with mel-spectrogram aux feats
- arctic_si-close_16k_melspc_ns.wav: si-close model with mel-spectrogram aux feats + noise shaping with stft mcep
- ljspeech_raw_22.05k.wav: original in ljspeech database
- ljspeech_sd_22.05k_world.wav: sd model with world aux feats + noise shaping with world mcep
- ljspeech_sd_22.05k_melspc.wav: sd model with mel-spectrogram aux feats
- ljspeech_sd_22.05k_melspc_ns.wav: sd model with mel-spectrogram aux feats + noise shaping with stft mcep
- m-ailabs_raw_16k.wav: original in m-ailabs speech database
- m-ailabs_sd_16k_melspc.wav: sd model with mel-spectrogram aux feats
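The noise-shaping variants above apply a cepstrum-based prefilter to the waveform before quantization so that quantization noise is masked by the speech spectrum. The quantization itself is the mu-law companding commonly used by WaveNet; a self-contained sketch (256 classes, as in 8-bit mu-law; not this repository's exact code):

```python
import numpy as np

def mulaw_encode(x, mu=255):
    """Map a waveform in [-1, 1] to integer classes in [0, mu]."""
    y = np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)
    return ((y + 1) / 2 * mu + 0.5).astype(np.int64)

def mulaw_decode(q, mu=255):
    """Invert mulaw_encode: integer classes back to [-1, 1]."""
    y = 2 * q.astype(np.float64) / mu - 1
    return np.sign(y) * ((1 + mu) ** np.abs(y) - 1) / mu

# Round-trip a few amplitudes: companding spends more classes near zero,
# so small samples are reconstructed with low error.
x = np.linspace(-1, 1, 5)
q = mulaw_encode(x)
x_hat = mulaw_decode(q)
```

WaveNet then predicts a categorical distribution over these 256 classes at each time step instead of regressing the raw sample value.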
Please cite the following articles.
@inproceedings{tamamori2017speaker,
title={Speaker-dependent WaveNet vocoder},
author={Tamamori, Akira and Hayashi, Tomoki and Kobayashi, Kazuhiro and Takeda, Kazuya and Toda, Tomoki},
booktitle={Proceedings of Interspeech},
pages={1118--1122},
year={2017}
}
@inproceedings{hayashi2017multi,
title={An Investigation of Multi-Speaker Training for WaveNet Vocoder},
author={Hayashi, Tomoki and Tamamori, Akira and Kobayashi, Kazuhiro and Takeda, Kazuya and Toda, Tomoki},
booktitle={Proc. ASRU 2017},
year={2017}
}
@article{hayashi2018sp,
title={複数話者WaveNetボコーダに関する調査},
author={林知樹 and 小林和弘 and 玉森聡 and 武田一哉 and 戸田智基},
journal={電子情報通信学会技術研究報告},
year={2018}
}
Tomoki Hayashi @ Nagoya University
e-mail: hayashi.tomoki@g.sp.m.is.nagoya-u.ac.jp