Tacotron 2: a PyTorch implementation of Natural TTS Synthesis by Conditioning WaveNet on Mel Spectrogram Predictions.
This implementation includes distributed and FP16 support and uses the LJ Speech dataset.
Distributed and FP16 support relies on work by Christian Sarofeen and NVIDIA's Apex library.
- NVIDIA GPU + CUDA + cuDNN
- Download and extract the [LJ Speech dataset](https://keithito.com/LJ-Speech-Dataset/)
- Clone this repo:
git clone https://github.com/NVIDIA/tacotron2.git
- cd into this repo:
cd tacotron2
- Update .wav paths:
sed -i -- 's,DUMMY,ljs_dataset_folder/wavs,g' filelists/*.txt
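Each line in the filelists pairs a .wav path with its transcript, separated by a pipe character; the sed command above only rewrites the DUMMY placeholder prefix. For example, a line such as
DUMMY/LJ001-0001.wav|Printing, in the only sense...
becomes
ljs_dataset_folder/wavs/LJ001-0001.wav|Printing, in the only sense...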
- Install PyTorch 0.4
- Install Python requirements:
pip install -r requirements.txt
- OR use the Docker container (TBD)
- Train the model:
python train.py --output_directory=outdir --log_directory=logdir
- (OPTIONAL) Monitor training with TensorBoard:
tensorboard --logdir=outdir/logdir
- Multi-GPU (distributed) and FP16 training:
python -m multiproc train.py --output_directory=/outdir --log_directory=/logdir --hparams=distributed_run=True,fp16_run=True
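Note that train.py's --hparams flag takes comma-separated name=value overrides of the defaults defined in hparams.py, so other hyperparameters can be changed the same way; for example, assuming a batch_size hyperparameter is defined there:
python train.py --output_directory=outdir --log_directory=logdir --hparams=batch_size=32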
- Inference demo:
jupyter notebook --ip=127.0.0.1 --port=31337
- Load inference.ipynb
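The notebook generates mel spectrograms from text. Below is a minimal sketch of the same steps, assuming the module layout of this repo (create_hparams in hparams.py, a Tacotron2 model with an inference method in model.py, and text_to_sequence in the text package) and a trained checkpoint at the hypothetical path outdir/checkpoint_path:

```python
import numpy as np
import torch

from hparams import create_hparams      # assumed: default hyperparameter factory
from model import Tacotron2             # assumed: Tacotron 2 model definition
from text import text_to_sequence       # assumed: text-to-symbol-ID conversion

# Build the model and load trained weights (checkpoint path is hypothetical).
hparams = create_hparams()
model = Tacotron2(hparams).cuda().eval()
checkpoint = torch.load("outdir/checkpoint_path")
model.load_state_dict(checkpoint["state_dict"])

# Convert input text to a batch of symbol IDs.
text = "Waveforms are generated from these mel spectrograms."
sequence = np.array(text_to_sequence(text, ["english_cleaners"]))[None, :]
sequence = torch.from_numpy(sequence).cuda().long()

# Run inference; the postnet output is the mel spectrogram to vocode,
# e.g. with nv-wavenet.
with torch.no_grad():
    mel_outputs, mel_outputs_postnet, _, alignments = model.inference(sequence)
print(mel_outputs_postnet.shape)
```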
Related repo: [nv-wavenet](https://github.com/NVIDIA/nv-wavenet): faster-than-real-time WaveNet inference.
This implementation uses code from repos by Keith Ito and Prem Seetharaman, as described in our code.
We are inspired by Ryuichi Yamamoto's Tacotron PyTorch implementation.
We are thankful to the Tacotron 2 paper authors, especially Jonathan Shen, Yuxuan Wang and Zongheng Yang.