This repository is the official PyTorch implementation of VISinger2.
- Apr 10 2023: Add egs/visinger2_flow, which adds a flow to VISinger2 for a more flexible prior distribution.
- Jan 31 2023: Modify the ground-truth duration (gt-dur) extraction in dataset.py; replace the dsp-wav with a sinusoidal signal as input to the HiFi-GAN decoder.
- Jan 10 2023: Initial commit.
- Install the Python requirements: pip install -r requirements.txt
- Download the Opencpop dataset.
- Prepare the data following the layout of data/opencpop (wavs, trainset.txt, testset.txt, train.list, test.list).
- Modify egs/visinger2/config.json, setting data/data_dir and train/save_dir; a sketch of the expected layout follows this list.
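A rough sketch of the expected dataset layout and the two config fields to point at it, assuming the file names listed above (adjust the paths to your environment):

```bash
# Expected layout under data/opencpop (names taken from the list above):
ls data/opencpop
# wavs/  trainset.txt  testset.txt  train.list  test.list

# In egs/visinger2/config.json, set (keys as referenced above; other fields unchanged):
#   "data":  { ..., "data_dir": "data/opencpop" }
#   "train": { ..., "save_dir": "/path/to/checkpoints" }   # hypothetical save location
```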
Preprocess the data:
```bash
cd egs/visinger2
bash bash/preprocess.sh config.json
```
Train the model:
```bash
cd egs/visinger2
bash bash/train.sh 0
```
We trained the model for 500k steps with a batch size of 16.
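A small usage sketch, assuming the positional argument of bash/train.sh selects the visible CUDA device(s); check the script for its exact meaning on your checkout:

```bash
# Assumption: the trailing argument picks the GPU(s), e.g. via CUDA_VISIBLE_DEVICES.
cd egs/visinger2
bash bash/train.sh 0   # train on GPU 0
# Checkpoints are expected to appear under the train/save_dir set in config.json.
```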
Modify model_dir, input_dir, and output_dir in inference.sh, then run:
```bash
cd egs/visinger2
bash bash/inference.sh
```
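A minimal sketch of the variables to set inside bash/inference.sh, assuming they are plain shell assignments (the variable names come from the step above; the actual lines in the script may differ):

```bash
# Hypothetical edits inside egs/visinger2/bash/inference.sh:
#   model_dir=/path/to/checkpoints     # the train/save_dir used during training
#   input_dir=/path/to/test/labels     # scores/labels to synthesize
#   output_dir=/path/to/output/wavs    # where the generated audio will be written
cd egs/visinger2
bash bash/inference.sh
```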
Some audio samples can be found on the demo website.
The pre-trained model trained on Opencpop is here, the corresponding config.json is here, and the test-set results synthesized by this pre-trained model are here.
We referred to VITS, HiFi-GAN, gst-tacotron, and ddsp_pytorch when implementing this project. Thanks to swagger-coder for helping build visinger2_flow.