GPEN


GAN Prior Embedded Network for Blind Face Restoration in the Wild

Paper | Supplementary | Demo


Tao Yang¹, Peiran Ren¹, Xuansong Xie¹, Lei Zhang¹,²
¹DAMO Academy, Alibaba Group, Hangzhou, China
²Department of Computing, The Hong Kong Polytechnic University, Hong Kong, China

Face Restoration

Face Colorization

Face Inpainting

Conditional Image Synthesis (Seg2Face)

News

(2021-12-29) Added online demos on Hugging Face Spaces. Many thanks to CJWBW and AK391.

(2021-12-16) More models will be released, including one-to-many FSR models. Stay tuned.

(2021-12-16) Released simplified training code for GPEN. It differs from the implementation in the paper but achieves comparable performance. We strongly recommend changing the degradation model; a minimal sketch is given under Usage below.

(2021-12-09) Added face parsing to better paste restored faces back into the original image.

(2021-12-09) GPEN can now run on CPU; simply omit --use_cuda.

(2021-12-01) GPEN can now run on Windows without compiling CUDA code. Please check it out. Thanks to Animadversio. Alternatively, you can try GPEN-Windows. Many thanks to Cioscos.

(2021-10-22) GPEN can now work with SR methods. An SR model trained by ourselves is provided; replace it with your own model if necessary.

(2021-10-11) The Colab demo for GPEN is now available.

Usage

Requirements: Python, PyTorch, CUDA (driver), gcc.
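
Before passing --use_cuda, you may want to confirm that PyTorch can see your GPU. A quick sanity check (not part of this repo) is:

import torch
# Print the installed PyTorch version and whether a CUDA device is visible.
print(torch.__version__)
print(torch.cuda.is_available())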

  • Clone this repository:
git clone https://github.com/yangxy/GPEN.git
cd GPEN
  • Restore faces (a scripted batch example is given at the end of this section):
python face_enhancement.py --model GPEN-BFR-512 --size 512 --channel_multiplier 2 --narrow 1 --use_sr --use_cuda --indir examples/imgs --outdir examples/outs-BFR
  • Colorize faces:
python face_colorization.py
  • Complete faces:
python face_inpainting.py
  • Synthesize faces:
python segmentation2face.py
  • Train GPEN for BFR with 4 GPUs:
CUDA_VISIBLE_DEVICES='0,1,2,3' python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 train_simple.py --size 1024 --channel_multiplier 2 --narrow 1 --ckpt weights --sample results --batch 2 --path your_path_of_cropped+aligned_hq_faces (e.g., FFHQ)
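
As noted in the News above, we recommend changing the degradation model used to synthesize low-quality training faces. The sketch below outlines a typical blind-face-restoration degradation (blur, downsampling, noise, JPEG compression); the function name and parameter ranges are illustrative, not the pipeline shipped with train_simple.py:

# Illustrative degradation pipeline for synthesizing LQ training faces.
# NOTE: this is a sketch; names and ranges are assumptions, not the
# degradation model implemented in train_simple.py.
import cv2
import numpy as np

def degrade_face(hq_img):
    h, w = hq_img.shape[:2]
    img = hq_img.astype(np.float32) / 255.0
    # Gaussian blur with a random kernel width
    sigma = np.random.uniform(1.0, 10.0)
    img = cv2.GaussianBlur(img, (0, 0), sigmaX=sigma)
    # Random downsampling
    scale = np.random.uniform(1.0, 8.0)
    img = cv2.resize(img, (int(w / scale), int(h / scale)), interpolation=cv2.INTER_LINEAR)
    # Additive Gaussian noise
    img = np.clip(img + np.random.randn(*img.shape) * np.random.uniform(0.0, 0.05), 0.0, 1.0)
    # JPEG compression at a random quality
    quality = int(np.random.uniform(30, 90))
    _, buf = cv2.imencode('.jpg', (img * 255).astype(np.uint8), [int(cv2.IMWRITE_JPEG_QUALITY), quality])
    img = cv2.imdecode(buf, cv2.IMREAD_COLOR)
    # Upsample back to the original resolution as the network input
    return cv2.resize(img, (w, h), interpolation=cv2.INTER_LINEAR)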

When testing your own model, set --key g_ema.
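
The restoration command above can also be driven from Python, for example to process several folders in one run. The snippet below simply wraps the same CLI; the folder paths are illustrative:

# Example: batch-run the face restoration CLI over several input folders.
# Paths are illustrative; the flags match the face_enhancement.py call above.
import subprocess

jobs = [
    ("examples/imgs", "examples/outs-BFR"),
    # ("path/to/more/imgs", "path/to/more/outs"),
]
for indir, outdir in jobs:
    subprocess.run([
        "python", "face_enhancement.py",
        "--model", "GPEN-BFR-512", "--size", "512",
        "--channel_multiplier", "2", "--narrow", "1",
        "--use_sr", "--use_cuda",  # omit --use_cuda to run on CPU (see News)
        "--indir", indir, "--outdir", outdir,
    ], check=True)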

Main idea

Citation

If our work is useful for your research, please consider citing:

@inproceedings{Yang2021GPEN,
    title={GAN Prior Embedded Network for Blind Face Restoration in the Wild},
    author={Tao Yang and Peiran Ren and Xuansong Xie and Lei Zhang},
    booktitle={IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
    year={2021}
}

License

© Alibaba, 2021. For academic and non-commercial use only.

Acknowledgments

We borrow some code from Pytorch_Retinaface, stylegan2-pytorch, Real-ESRGAN, and GFPGAN.

Contact

If you have any questions or suggestions about this paper, feel free to reach me at yangtao9009@gmail.com.