waveglow-tensorflow

TensorFlow implementation of NVIDIA WaveGlow


WAVEGLOW

This is a TensorFlow implementation of NVIDIA/waveglow. Samples generated at step 592k are available in /step_592k_samples.

Setup

First we need Python 3 and TensorFlow with GPU support; the version this repository uses is r1.12. Other versions may also work, but I have not verified which ones run without errors.

We also need:

You can also set up the environment with the Dockerfile in the repository. However, the Dockerfile builds TensorFlow r1.12 from source so that the CUDA version can be pinned to 9.2, so building the Docker image may take considerably longer.

docker build -t {IMAGE_NAME_YOU_LIKE} .
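
Whether you use the Docker image or a local install, a quick check like the one below can confirm that TensorFlow sees your GPU. It is only an illustrative sanity check written against the TF 1.x API; the file name is an example and the script is not part of this repository.

# check_gpu.py -- illustrative sanity check, not part of this repository
import tensorflow as tf

print("TensorFlow version:", tf.__version__)          # expect 1.12.x
print("GPU available:", tf.test.is_gpu_available())   # True if CUDA/cuDNN are set up correctly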

Dataset Preparation

To prepare the dataset, first point the Input Path section of src/hparams.py to your dataset path, then run:

cd src
python3 dataset/procaudio.py
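
As a rough illustration only, pointing the Input Path section at an LJ Speech checkout might look like the lines below. The actual field names live in src/hparams.py; dataset_dir is referenced later in this README, while the example paths here are assumptions.

# In src/hparams.py, Input Path section (illustrative values; check the file for the real field names)
dataset_dir = '/data/LJSpeech-1.1/wavs'   # directory containing the .wav files
# the metadata.csv path is configured in the same section, e.g. '/data/LJSpeech-1.1/metadata.csv'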

The metadata.csv referenced in src/hparams.py uses the following format:

Audio Name without extension|Text only for notation|True Text

Example:

LJ001-0008|has never been surpassed.|has never been surpassed.
LJ001-0009|Printing, then, for our purpose, may be considered as the art of making books by means of movable types.|Printing, then, for our purpose, may be considered as the art of making books by means of movable types.

We take this format as input because we use the LJ Speech Dataset as training data; its metadata.csv is already in exactly this format (it is intended for TTS). If you are training on your own dataset, you can pad the text fields with placeholders, since the vocoder does not use them:

audio1|deadbeef|deadbeef
audio2|deadbeef|deadbeef
audio3|deadbeef|deadbeef

Then audio1.wav, audio2.wav, and audio3.wav should exist in the dataset_dir you specified in src/hparams.py.

All audio files should be in wav format.
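
If you need to create such a padded metadata.csv for your own wav files, a minimal sketch like the following would do it. The script name, paths, and placeholder text are hypothetical; only the "name|text|text" format and dataset_dir come from this README.

# make_metadata.py -- minimal sketch for building a padded metadata.csv (hypothetical helper)
import os

dataset_dir = '/path/to/your/wavs'   # should match the dataset_dir in src/hparams.py
placeholder = 'deadbeef'             # the vocoder ignores the text fields

# Write metadata.csv wherever src/hparams.py expects it; here it goes next to the wav files.
with open(os.path.join(dataset_dir, 'metadata.csv'), 'w') as f:
    for fname in sorted(os.listdir(dataset_dir)):
        if fname.endswith('.wav'):
            name = os.path.splitext(fname)[0]
            # Audio name without extension | text only for notation | true text
            f.write('{}|{}|{}\n'.format(name, placeholder, placeholder))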

Training

To start training, run:

cd src
python3 main.py --use_weight_norm --truncate_sample

The configuration options, hyperparameters, and their descriptions are in src/hparams.py.

TODO

  • Add loss curve
  • Multiprocess dataset preparation

References