Official implementation of the paper "Lifting 2D StyleGAN for 3D-Aware Face Generation".
You can create the conda environment by running:
conda env create -f environment.yml
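Then activate it before running any of the commands below (the environment name here is a guess; use whatever name environment.yml specifies):
conda activate lifting-stylegan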
Download our pre-trained StyleGAN and face embedding network from here, and unzip them into the pretrained/ folder. Then you can start training by running:
python tools/train.py config/ffhq_256.py
Note that you do not need an image dataset here, because we simply lift the StyleGAN2 using images generated by the model itself.
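To make this self-supervision concrete, here is a minimal sketch (not the repo's actual training loop) of how training images can be drawn from the generator itself. It assumes the Generator class from rosinality's stylegan2-pytorch and a hypothetical checkpoint filename:

import torch
from model import Generator  # generator class from the stylegan2-pytorch code

device = "cuda"
g = Generator(size=256, style_dim=512, n_mlp=8).to(device)
ckpt = torch.load("pretrained/stylegan2_ffhq_256.pt")  # hypothetical filename
g.load_state_dict(ckpt["g_ema"])  # "g_ema" is the key rosinality's training script saves
g.eval()

with torch.no_grad():
    z = torch.randn(8, 512, device=device)  # a batch of random latent codes
    images, _ = g([z], truncation=0.7, truncation_latent=g.mean_latent(4096))
# "images" plays the role of the dataset that the lifting network trains on.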
We use a re-cropped version of FFHQ to match the cropping of our face embedding network. You can find this dataset here. The cats dataset can be found here.
To train a StyleGAN2 on your own dataset, check the content under the stylegan2-pytorch folder, as sketched below. After training a StyleGAN2, you can lift it using our training code. Note that our method might not apply to other kinds of images if they are very different from human faces.
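For reference, a typical invocation of that code (assuming the bundled copy follows rosinality's command-line interface; the paths are placeholders) first packs your images into an LMDB and then trains the GAN:
python prepare_data.py --out /path/to/your/lmdb --size 256 /path/to/your/images
python train.py --size 256 /path/to/your/lmdb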
You can generate random samples from a lifted GAN by running:
python tools/generate_images.py /path/to/the/checkpoint --output_dir results/
Make sure the checkpoint file and its config.py file are in the same folder.
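For example, a valid checkpoint folder could look like this (the checkpoint file name is hypothetical; config.py is the same config used for training):
checkpoints/ffhq_256/
├── config.py
└── ckpt_latest.pt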
You can generate GIF animations of rotated faces by running:
python tools/generate_poses.py /path/to/the/checkpoint --output_dir results/ --type yaw
Similarly, you can generate faces with different lighting directions:
python tools/generate_lighting.py /path/to/the/checkpoint --output_dir results/
We use the code from rosinality's stylegan2-pytorch to compute the FID. First, compute the Inception statistics of the real images:
python utils/calc_inception.py /path/to/the/dataset/lmdb
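By default this script uses 50,000 samples at a resolution of 256; the flags below follow rosinality's calc_inception.py, so adjust --size to your image resolution:
python utils/calc_inception.py --size 256 --batch 64 --n_sample 50000 /path/to/the/dataset/lmdb
The script writes the statistics to a pickle file, which is what you pass to --inception in the next step.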
You can skip this step if you use our pre-calculated statistics file (link). Then, to compute the FID, run:
python tools/test_fid.py /path/to/the/checkpoint --inception /path/to/the/inception/file
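For example, with the hypothetical checkpoint layout shown earlier:
python tools/test_fid.py checkpoints/ffhq_256/ckpt_latest.pt --inception /path/to/the/inception/file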
Part of this code is based on Wu's Unsup3D and rosinality's stylegan2-pytorch.