Text2Video
This is the code for our ICASSP 2022 paper "Text2Video: Text-driven Talking-head Video Synthesis with Phonetic Dictionary". Project Page
Introduction
With the advance of deep learning, automatic video generation from audio or text has become an emerging and promising research topic. In this paper, we present a novel approach to synthesizing video from text. The method builds a phoneme-pose dictionary and trains a generative adversarial network (GAN) to generate video from interpolated phoneme poses. Compared to audio-driven video generation algorithms, our approach has a number of advantages: 1) it needs only a fraction of the training data required by an audio-driven approach; 2) it is more flexible and not vulnerable to speaker variation; 3) it significantly reduces preprocessing, training, and inference time. We perform extensive experiments comparing the proposed method with state-of-the-art talking-face generation methods on a benchmark dataset and datasets of our own. The results demonstrate the effectiveness and superiority of our approach.
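At its core, the method looks up a key pose for each phoneme in the aligned transcript and interpolates between these key poses to obtain a per-frame pose sequence, which the GAN then renders into video frames. Below is a minimal sketch of that lookup-and-interpolate step, not the repo's actual implementation; the dictionary contents, phoneme set, pose dimensionality, and frame rate are all illustrative assumptions.

```python
import numpy as np

# Hypothetical phoneme-pose dictionary: phoneme -> key pose (flattened landmarks).
# In the paper this dictionary is built per speaker from training video.
phoneme_pose = {
    "AH": np.array([0.2, 0.5]),
    "T":  np.array([0.1, 0.1]),
    "S":  np.array([0.0, 0.3]),
}

def interpolate_poses(phonemes, fps=25):
    """phonemes: list of (phoneme, start_sec, end_sec) from forced alignment.
    Returns one interpolated pose per output video frame."""
    # Anchor each key pose at the temporal center of its phoneme.
    times = np.array([(start + end) / 2.0 for _, start, end in phonemes])
    keys = np.stack([phoneme_pose[p] for p, _, _ in phonemes])
    clip_end = phonemes[-1][2]
    frame_times = np.arange(0.0, clip_end, 1.0 / fps)
    # Linearly interpolate each pose dimension over time.
    return np.stack(
        [np.interp(frame_times, times, keys[:, d]) for d in range(keys.shape[1])],
        axis=1,
    )

frames = interpolate_poses([("S", 0.0, 0.1), ("AH", 0.1, 0.3), ("T", 0.3, 0.4)])
print(frames.shape)  # (10, 2): one 2-D pose per frame at 25 fps
```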
Data / Preprocessing
Set up
- Clone this repo
git clone git@github.com:sibozhang/Text2Video.git
- Download and install the modified vid2vid repo: vid2vid
- Download the trained models
Please create a 'checkpoints' folder in the vid2vid folder and put the trained models in it.
VidTIMIT fadg0 (English, Female) Dropbox
Baidu Yun link: https://pan.baidu.com/s/1L1cvqwLu_uqN2cbW-bDgdA password: hygt
Xuesong (Chinese, Male) Dropbox
Baidu Yun link: https://pan.baidu.com/s/1lhYRakZLnkQ8nqMuLJt_dA password: 40ob
- Prepare the data and folders in the following layout

Text2Video
├── *phoneme_data
├── model
├── ...
vid2vid
├── ...
venv
├── vid2vid
- Set up the environment
sudo apt-get install sox libsox-fmt-mp3
pip install zhon
pip install moviepy
pip install ffmpeg
pip install dominate
pip install pydub
For Chinese, we use vosk to get the timestamp of each word (a usage sketch follows the install commands below). Please install vosk from https://alphacephei.com/vosk/install and unpack the model as 'model' in the current folder, or install:
pip install vosk
pip install cn2an
pip install pypinyin
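As a reference for the vosk step, here is a minimal sketch of extracting word-level timestamps; it assumes a 16 kHz mono WAV named speech.wav and the unpacked 'model' folder, and may differ from the repo's actual preprocessing scripts.

```python
import json
import wave

from vosk import Model, KaldiRecognizer

wf = wave.open("speech.wav", "rb")               # hypothetical 16 kHz mono WAV
model = Model("model")                           # the unpacked 'model' folder
rec = KaldiRecognizer(model, wf.getframerate())
rec.SetWords(True)                               # request word-level timestamps

words = []
while True:
    data = wf.readframes(4000)
    if len(data) == 0:
        break
    if rec.AcceptWaveform(data):
        words.extend(json.loads(rec.Result()).get("result", []))
words.extend(json.loads(rec.FinalResult()).get("result", []))

for w in words:
    print(w["word"], w["start"], w["end"])       # word with start/end in seconds
```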
Testing
- Activate the vid2vid virtual environment
source ../venv/vid2vid/bin/activate
- Generate video with real audio in English
sh text2video_audio.sh $1 $2 $3
- Generate video with TTS audio in English
sh text2video_tts.sh $1 $2 $3
- Generate video with TTS audio in Chinese
sh text2video_tts_chinese.sh $1 $2 $3

$1: "input text"
$2: person (e.g., fadg0)
$3: gender, f for female or m for male
Example 1: Test VidTIMIT data with real audio.
sh text2video_audio.sh "She had your dark suit in greasy wash water all year." fadg0 f
Example 2: Test VidTIMIT data with TTS audio.
sh text2video_tts.sh "She had your dark suit in greasy wash water all year." fadg0 f
Example 3: Test with Chinese female TTS audio (the input text is a weather report for Hefei on February 24, 2020).
sh text2video_tts_chinese.sh "正在为您查询合肥的天气情况。今天是2020年2月24日,合肥市今天多云,最低温度9摄氏度,最高温度15摄氏度,微风。" henan f
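The shell scripts above already handle combining the rendered frames with the audio track. For reference, a minimal sketch of that final muxing step using moviepy (the file names are hypothetical, not the scripts' actual outputs):

```python
from moviepy.editor import VideoFileClip, AudioFileClip

# Attach the (real or TTS) audio track to the silent video rendered by vid2vid.
video = VideoFileClip("generated_silent.mp4")  # hypothetical vid2vid output
audio = AudioFileClip("speech.wav")            # hypothetical audio file
video.set_audio(audio).write_videofile(
    "talking_head.mp4", codec="libx264", audio_codec="aac"
)
```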
Training with your own data
Citation
Please cite our paper in your publications.
Sibo Zhang, Jiahong Yuan, Miao Liao, and Liangjun Zhang. PDF | Result Video
@INPROCEEDINGS{9747380,
author={Zhang, Sibo and Yuan, Jiahong and Liao, Miao and Zhang, Liangjun},
booktitle={ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
title={Text2video: Text-Driven Talking-Head Video Synthesis with Personalized Phoneme-Pose Dictionary},
year={2022},
volume={},
number={},
pages={2659-2663},
doi={10.1109/ICASSP43922.2022.9747380}
}
@article{zhang2021text2video,
title={Text2Video: Text-driven Talking-head Video Synthesis with Phonetic Dictionary},
author={Zhang, Sibo and Yuan, Jiahong and Liao, Miao and Zhang, Liangjun},
journal={arXiv preprint arXiv:2104.14631},
year={2021}
}
Appendices
Acknowledgements
This code is based on the vid2vid framework.