This repository contains the code for our paper *Dual-Attention GAN for Large-Pose Face Frontalization* (FG2020).
The code is tested on:
- Python 3.6+
- PyTorch 0.4.1
We provide training and testing code for MultiPIE. Faces are first cropped and saved to a folder with the following structure (a sketch for creating this layout follows the list):
- MultiPIE/cropped
  - gallery
  - front
  - pose
  - mask_hair_ele_face
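
A minimal sketch (a hypothetical helper, not part of the released code) for creating that layout:

```python
import os

# Hypothetical helper: create the expected cropped-data layout.
# The root path should match the data directory set in option.py.
root = 'MultiPIE/cropped'
for sub in ('gallery', 'front', 'pose', 'mask_hair_ele_face'):
    os.makedirs(os.path.join(root, sub), exist_ok=True)
```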
For the face parsing model, we use [face-parsing.PyTorch](https://github.com/zllrunning/face-parsing.PyTorch). Three segments (i.e., hair, keypoints, and face) are generated and saved in the `mask_hair_ele_face` folder.
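
A hedged sketch of how the three masks could be produced with that parser. The class-index mapping follows the CelebAMask-HQ labels used by face-parsing.PyTorch, and `79999_iter.pth` is the checkpoint released there; the exact label subset used for the element mask in this repo is an assumption:

```python
import numpy as np
import torch
from PIL import Image
from torchvision import transforms
from model import BiSeNet  # from the face-parsing.PyTorch repo

net = BiSeNet(n_classes=19)
net.load_state_dict(torch.load('79999_iter.pth', map_location='cpu'))
net.eval()

to_tensor = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)),
])

img = Image.open('face.png').convert('RGB').resize((512, 512), Image.BILINEAR)
with torch.no_grad():
    out = net(to_tensor(img).unsqueeze(0))[0]   # (1, 19, 512, 512) logits
parsing = out.squeeze(0).argmax(0).numpy()      # per-pixel class labels

hair = (parsing == 17)                                      # hair class
elements = np.isin(parsing, [2, 3, 4, 5, 10, 11, 12, 13])   # brows, eyes, nose, mouth, lips (assumed subset)
face = (parsing == 1) | elements                            # skin plus facial elements
for name, mask in [('hair', hair), ('ele', elements), ('face', face)]:
    Image.fromarray((mask * 255).astype(np.uint8)).save(f'{name}.png')
```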
The original CAS-PEAL-R1 dataset can be found at: CAS-PEAL-R1. The cropped version can be downloaded from here.
Change the data directory in `option.py`.
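
Since the training framework follows EDSR (see the acknowledgements below), the data path is presumably exposed as a command-line argument in `option.py`. A hedged example, assuming an EDSR-style `--dir_data` flag (the actual flag name may differ):

```python
# In option.py (EDSR-style argparse; the actual flag name may differ):
parser.add_argument('--dir_data', type=str,
                    default='/path/to/MultiPIE/cropped',
                    help='root directory of the cropped MultiPIE data')
```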
For training, run:

```bash
python main.py --save_results --save_gt --save_models
```
We include an identity loss in our code, which is based on LightCNN. The pretrained LightCNN model can be downloaded from https://drive.google.com/file/d/1Jn6aXtQ84WY-7J3Tpr2_j6sX0ch9yucS/view. After downloading, save the model under the `src` folder.
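
For reference, a minimal sketch of how such an identity loss can be computed with frozen LightCNN features. This is a hedged illustration, not the released implementation: `load_lightcnn` and `identity_loss` are hypothetical helper names, and the grayscale/resize preprocessing and input range are assumptions.

```python
import torch
import torch.nn.functional as F
from light_cnn import LightCNN_29Layers_v2  # definition from the LightCNN repo

def load_lightcnn(ckpt_path, device='cuda'):
    """Load a frozen LightCNN-29 v2 feature extractor (hypothetical helper)."""
    net = LightCNN_29Layers_v2()
    ckpt = torch.load(ckpt_path, map_location='cpu')
    # The released checkpoints were saved from DataParallel, so keys carry a
    # 'module.' prefix; the final classifier (fc2) is not needed for features.
    state = {k.replace('module.', ''): v
             for k, v in ckpt['state_dict'].items() if 'fc2' not in k}
    net.load_state_dict(state, strict=False)
    net.eval().to(device)
    for p in net.parameters():
        p.requires_grad_(False)
    return net

def identity_loss(net, fake, real):
    """L1 distance between LightCNN features of the output and ground truth.

    Assumes `fake`/`real` are RGB tensors in [0, 1]; LightCNN expects
    128x128 single-channel input.
    """
    def prep(x):
        x = x.mean(dim=1, keepdim=True)  # RGB -> grayscale (assumption)
        return F.interpolate(x, size=(128, 128), mode='bilinear',
                             align_corners=False)
    _, feat_fake = net(prep(fake))   # forward returns (logits, 256-d feature)
    with torch.no_grad():
        _, feat_real = net(prep(real))
    return F.l1_loss(feat_fake, feat_real)
```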
For testing, run:

```bash
python main.py --save test_folder --test_only --save_results --save_gt --pre_train ../experiment/model.pth
```
Please cite this paper in your publications if it helps your research:
```
@article{yin2020dualattention,
  title={Dual-Attention GAN for Large-Pose Face Frontalization},
  author={Yu Yin and Songyao Jiang and Joseph P. Robinson and Yun Fu},
  year={2020},
  eprint={2002.07227},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
```
We refer to EDSR for the framework of the training code. The self-attention module is adapted from SAGAN, and the identity loss module is adapted from LightCNN.