TL;DR: WildAvatar is a large-scale dataset from YouTube with 10,000+ human subjects, designed to address the limitations of existing laboratory datasets for avatar creation.
```shell
conda create -n wildavatar python=3.9
conda activate wildavatar
pip install -r requirements.txt
pip install pyopengl==3.1.4
```
- Download WildAvatar.zip and put it under ./data/WildAvatar/.
- Unzip WildAvatar.zip.
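After extraction, each subject lives in a folder named by its YouTube ID under ./data/WildAvatar. As a quick sanity check, a minimal sketch (the helper name is ours, not part of the repo) to enumerate the extracted subject folders:

```python
from pathlib import Path

def list_subjects(root="./data/WildAvatar"):
    """Return the names of subject folders (YouTube IDs) under the dataset root."""
    root = Path(root)
    if not root.is_dir():
        return []
    # Each immediate subdirectory is assumed to be one subject's folder.
    return sorted(p.name for p in root.iterdir() if p.is_dir())

print(len(list_subjects()), "subjects found")
```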
- Install yt-dlp.
- Run the scripts below, depending on what you need:
- If you need key frames (RGB+MASK+SMPL, needed for the SMPL Visualization and Creating Wild Avatars steps below), please download and extract images from YouTube on your own by running
```shell
python prepare_data.py --ytdl ${PATH_TO_YT-DLP}
```
  Then you will find the downloaded images in ./data/WildAvatar-videos.
- If you need video clips, please download them from YouTube on your own by running
```shell
python download_video.py --ytdl ${PATH_TO_YT-DLP} --output_root "./data/WildAvatar-videos"
```
  Then you will find the video clips in ./data/WildAvatar-videos.
- If you need raw videos (the original user-uploaded videos), please download them from YouTube on your own by running
```shell
python download_video.py --ytdl ${PATH_TO_YT-DLP} --output_root "./data/WildAvatar-videos-raw" --raw
```
  Then you will find the raw videos in ./data/WildAvatar-videos-raw.
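The download scripts above wrap yt-dlp, fetching each subject's video by its YouTube ID. As an illustration only (we do not reproduce download_video.py's exact flags; the helper names and output template here are our own assumptions), a per-subject invocation can be sketched as:

```python
import subprocess

def build_ytdlp_command(ytdlp_path, youtube_id, output_root):
    """Assemble a yt-dlp command for one subject.

    Illustrative only: the actual flags used by download_video.py may differ.
    """
    url = f"https://www.youtube.com/watch?v={youtube_id}"
    # -o sets yt-dlp's output template; %(ext)s is filled with the container extension.
    return [ytdlp_path, url, "-o", f"{output_root}/{youtube_id}.%(ext)s"]

def download_subject(ytdlp_path, youtube_id, output_root):
    """Run the download, raising if yt-dlp exits with a non-zero status."""
    subprocess.run(build_ytdlp_command(ytdlp_path, youtube_id, output_root), check=True)
```

For example, `download_subject("yt-dlp", "__-ChmS-8m8", "./data/WildAvatar-videos")` would fetch the subject shown in the visualization example below.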
- Put SMPL_NEUTRAL.pkl under ./assets/.
- Run the following script to visualize the SMPL overlay for the human subject ${youtube_ID}:
```shell
python vis_smpl.py --subject "${youtube_ID}"
```
- The SMPL masks and overlay visualizations can be found in data/WildAvatar/${youtube_ID}/smpl and data/WildAvatar/${youtube_ID}/smpl_masks.
For example, if you run
```shell
python vis_smpl.py --subject "__-ChmS-8m8"
```
the SMPL masks and overlay visualizations can be found in data/WildAvatar/__-ChmS-8m8/smpl and data/WildAvatar/__-ChmS-8m8/smpl_masks.
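Conceptually, an SMPL overlay boils down to projecting the posed SMPL vertices into the image with the camera parameters. A minimal pinhole-projection sketch (the function name and the standard intrinsics/extrinsics symbols K, R, t are ours, not taken from vis_smpl.py):

```python
import numpy as np

def project_to_pixels(verts_world, K, R, t):
    """Project Nx3 world-space vertices to Nx2 pixel coordinates (pinhole model)."""
    cam = verts_world @ R.T + t   # world frame -> camera frame
    uv = cam @ K.T                # apply camera intrinsics
    return uv[:, :2] / uv[:, 2:3] # perspective divide by depth

# Example: focal length 500, principal point (320, 240), identity extrinsics.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0,   0.0,   1.0]])
pt = np.array([[0.0, 0.0, 2.0]])  # one point 2 m in front of the camera
print(project_to_pixels(pt, K, np.eye(3), np.zeros(3)))  # -> [[320. 240.]]
```

A point on the optical axis lands on the principal point, as expected; the overlay is obtained by rasterizing (or splatting) the projected vertices on top of the RGB frame.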
For training and testing on WildAvatar, we currently provide the adapted code for HumanNeRF and GauHuman.
If you find our work useful for your research, please cite our paper.
```
@article{huang2024wildavatar,
  title={WildAvatar: Web-scale In-the-wild Video Dataset for 3D Avatar Creation},
  author={Huang, Zihao and Hu, ShouKang and Wang, Guangcong and Liu, Tianqi and Zang, Yuhang and Cao, Zhiguo and Li, Wei and Liu, Ziwei},
  journal={arXiv preprint arXiv:2407.02165},
  year={2024}
}
```
This project is built on source codes shared by GauHuman, HumanNeRF, and CLIFF. Many thanks for their excellent contributions!
If you have any questions, please feel free to contact Zihao Huang (zihaohuang at hust.edu.cn).