pytorch-wavenet

This is an implementation of the WaveNet architecture, as described in the original paper (WaveNet: A Generative Model for Raw Audio, van den Oord et al., 2016).
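
As a purely illustrative sketch (not the code in this repository), the core building blocks of the architecture are causal convolutions with exponentially growing dilation, combined through gated activations and residual connections:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalDilatedConv1d(nn.Module):
    """1D convolution that only sees past samples (causal), with a given dilation."""
    def __init__(self, channels, kernel_size=2, dilation=1):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation   # left-pad so the output at t depends only on inputs <= t
        self.conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)

    def forward(self, x):
        return self.conv(F.pad(x, (self.pad, 0)))

class GatedResidualBlock(nn.Module):
    """Gated activation unit (tanh * sigmoid) with a residual connection, as in the WaveNet paper."""
    def __init__(self, channels, dilation):
        super().__init__()
        self.filter_conv = CausalDilatedConv1d(channels, dilation=dilation)
        self.gate_conv = CausalDilatedConv1d(channels, dilation=dilation)
        self.residual = nn.Conv1d(channels, channels, 1)

    def forward(self, x):
        out = torch.tanh(self.filter_conv(x)) * torch.sigmoid(self.gate_conv(x))
        return x + self.residual(out)

# Stacking blocks with dilations 1, 2, 4, ... gives a large receptive field with few layers.
channels = 32
stack = nn.Sequential(*[GatedResidualBlock(channels, 2 ** i) for i in range(8)])
audio = torch.randn(1, channels, 4096)   # (batch, channels, time)
out = stack(audio)                       # same shape as the input
```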

Features

  • Automatic creation of a dataset (training and validation/test set) from all sound files (.wav, .aiff, .mp3) in a directory
  • Efficient multithreaded data loading
  • Logging to TensorBoard (Training loss, validation loss, validation accuracy, parameter and gradient histograms, generated samples)
  • Fast generation, as introduced in the Fast WaveNet Generation Algorithm paper (see the sketch after this list)
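
The idea behind fast generation, sketched below in purely illustrative code (none of the names are this repository's actual classes), is that each dilated layer keeps a queue of its past activations. Producing one new sample then costs a single small matrix multiply per layer, instead of re-running the convolutions over the whole receptive field:

```python
from collections import deque
import torch
import torch.nn as nn

class CachedDilatedStep:
    """Wraps a trained kernel-size-2 dilated conv for one-sample-at-a-time generation.
    A queue holds the activation from `dilation` steps ago, so each step reuses it
    instead of recomputing the convolution over the full history."""
    def __init__(self, conv: nn.Conv1d, dilation: int):
        self.conv = conv
        self.queue = deque([torch.zeros(conv.in_channels)] * dilation, maxlen=dilation)

    def step(self, x):
        past = self.queue.popleft()   # activation from `dilation` steps back
        self.queue.append(x)          # remember the current activation for later steps
        w = self.conv.weight          # shape: (out_channels, in_channels, 2)
        out = w[:, :, 0] @ past + w[:, :, 1] @ x
        if self.conv.bias is not None:
            out = out + self.conv.bias
        return out
```

During generation, each dilated layer of the trained network would be wrapped this way and fed one activation vector per generated sample.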

Requirements

  • python 3
  • pytorch 0.3
  • numpy
  • librosa
  • jupyter
  • tensorflow for TensorBoard logging

Demo

For an introduction to using this model, take a look at the WaveNet demo notebook. Audio clips generated by a simple trained model can be found in the generated samples directory.
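
The notebook shows the actual interface. Purely as a rough picture of what sampling boils down to (the model interface and every name below are hypothetical), generation is an autoregressive loop that repeatedly feeds the network its own previous output and mu-law decodes the result:

```python
import numpy as np
import torch

def mu_law_decode(indices, channels=256):
    """Map quantized sample indices back to audio in [-1, 1]."""
    mu = channels - 1
    signal = 2.0 * indices.astype(np.float32) / mu - 1.0
    return np.sign(signal) * ((1 + mu) ** np.abs(signal) - 1) / mu

def generate(model, length, receptive_field, channels=256, temperature=1.0):
    """Naive autoregressive sampling: O(receptive_field) work per sample.
    The fast generation scheme avoids this recomputation by caching activations."""
    samples = [channels // 2] * receptive_field          # start from "silence"
    for _ in range(length):
        context = torch.tensor(samples[-receptive_field:]).unsqueeze(0)
        with torch.no_grad():
            logits = model(context)[0, :, -1]            # predicted distribution over the next sample
        probs = torch.softmax(logits / temperature, dim=0)
        samples.append(int(torch.multinomial(probs, 1)))
    return mu_law_decode(np.array(samples[receptive_field:]))
```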