A Python/PyTorch app for easily synthesising human voices
- Windows 10 or Ubuntu 20.04+ operating system
- NVIDIA GPU with at least 4GB of memory
- Up-to-date NVIDIA driver (version 450.36+)
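The driver requirement above (version 450.36+) can be checked programmatically. A minimal sketch, assuming a dotted version string such as the one reported by `nvidia-smi`; `driver_meets_minimum` is an illustrative helper, not part of the app:

```python
def parse_version(version: str) -> tuple:
    """Parse a dotted driver version string like '450.36' into comparable integers."""
    return tuple(int(part) for part in version.split("."))

def driver_meets_minimum(installed: str, minimum: str = "450.36") -> bool:
    """Return True if the installed NVIDIA driver satisfies the minimum version."""
    return parse_version(installed) >= parse_version(minimum)

if __name__ == "__main__":
    # Tuple comparison handles versions with differing segment counts.
    print(driver_meets_minimum("460.91.03"))  # True
    print(driver_meets_minimum("440.100"))    # False
```

In practice the installed version could be read from `nvidia-smi` output; GPU availability and memory can likewise be queried through `torch.cuda` once PyTorch is installed.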
- Automatic dataset generation
- Easy train start/stop
- Support for Kindle & Audible as data sources
- Data importing/exporting
- Simplified training & synthesis
- Word replacement suggestions
- Windows & Linux support
https://www.youtube.com/playlist?list=PLk5I7EvFL13GjBIDorh5yE1SaPGRG-i2l
https://voice-sharing-hub.herokuapp.com/
- Add support for alternative models
- Improved batch size estimation
- Multi-GPU support
- AMD GPU support
- Additional language support
Available to those with a manual install:
- Try out existing voices at uberduck.ai and Vocodes
- Synthesize in Colab (created by mega b#6696)
- Train in Colab (created by ericstheguy)
- Generate YouTube transcriptions (created by mega b#6696)
- Wit.ai transcription
This project uses a reworked version of Tacotron2 & WaveGlow. All rights belong to NVIDIA, and usage follows the requirements of their BSD-3 licence.
Additionally, the project uses DSAlign, Silero & hifi-gan.
Thank you to Dr. John Bustard at Queen's University Belfast for his support throughout the project.
Supported by uberduck.ai; reach out to them for live model hosting.
Also a big thanks to the members of the VocalSynthesis subreddit for their feedback.
Finally, thank you to everyone raising issues and contributing to the project.