Multi-target Voice Conversion without Parallel Data by Adversarially Learning Disentangled Audio Representations
This is the source code for Multi-target Voice Conversion without Parallel Data by Adversarially Learning Disentangled Audio Representations, which was accepted at Interspeech 2018 and selected as a finalist for the Best Student Paper Award.
You can find the conversion samples here.
The source code is currently a little messy; it will be refactored before the conference.