
DCT-Net: Domain-Calibrated Translation for Portrait Stylization

Project page | Video | Paper

Official implementation of DCT-Net for Portrait Stylization.

DCT-Net: Domain-Calibrated Translation for Portrait Stylization,
Yifang Men¹, Yuan Yao¹, Miaomiao Cui¹, Zhouhui Lian², Xuansong Xie¹
¹DAMO Academy, Alibaba Group, Beijing, China
²Wangxuan Institute of Computer Technology, Peking University, China
In: SIGGRAPH 2022 (TOG). arXiv preprint

Demo

demo_vid

Web Demo

Integrated into Hugging Face Spaces 🤗 using Gradio. Try out the Web Demo on Hugging Face Spaces.

News

(2022-07-07) The paper is now available on arXiv: https://arxiv.org/abs/2207.02426.

(2022-08-08) The cartoonization function can now be called directly from the ModelScope Python SDK.

(2022-08-08) The pretrained model and inference code for the 'anime' style are available now. More styles coming soon.

Requirements

  • Python 3
  • TensorFlow (>= 1.14)
  • easydict
  • numpy
  • Both CPU and GPU are supported

Quick Start

git clone https://github.com/menyifang/DCT-Net.git
cd DCT-Net

From Python SDK

Quick use with the ModelScope Python SDK:

  • Installation:
conda create -n dctnet python=3.8
conda activate dctnet
pip install tensorflow
pip install "modelscope[cv]" -f https://modelscope.oss-cn-beijing.aliyuncs.com/releases/repo.html
  • Downloads:
python download.py
  • Inference:
python run_sdk.py
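
The install/download/inference steps above boil down to a short script. The sketch below uses the standard ModelScope `pipeline` API; the task name and model id shown are the ones commonly associated with this model on ModelScope and may need checking against the current catalog, and `input.png` is a placeholder for your own portrait image:

```python
import cv2
from modelscope.outputs import OutputKeys
from modelscope.pipelines import pipeline

# Build the portrait-stylization pipeline; model weights are fetched
# on first use (model id assumed from the ModelScope catalog).
img_cartoon = pipeline('image-portrait-stylization',
                       model='damo/cv_unet_person-image-cartoon_compound-models')

# Run inference on a local image and save the stylized result.
result = img_cartoon('input.png')
cv2.imwrite('result.png', result[OutputKeys.OUTPUT_IMG])
```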

From source code

python run.py
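
`run.py` detects and aligns the face, then feeds crops to the TensorFlow generator. Image-to-image translation generators typically expect inputs normalized to [-1, 1]; the round trip below is a minimal sketch of that common convention (an assumption for illustration, not necessarily the repo's exact preprocessing):

```python
import numpy as np

def preprocess(img):
    """Map a uint8 image (H, W, 3) to float32 in [-1, 1]."""
    return img.astype(np.float32) / 127.5 - 1.0

def postprocess(out):
    """Map generator output in [-1, 1] back to a uint8 image.
    np.rint avoids the off-by-one errors that plain truncation causes."""
    return np.clip(np.rint((out + 1.0) * 127.5), 0, 255).astype(np.uint8)

# Round trip on a dummy image:
dummy = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
restored = postprocess(preprocess(dummy))
assert np.array_equal(dummy, restored)
```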

Acknowledgments

Face detector and aligner are adapted from Peppa_Pig_Face_Engine and InsightFace.

Citation

If you find this code useful for your research, please use the following BibTeX entry.

@article{men2022dct,
  title={DCT-Net: Domain-Calibrated Translation for Portrait Stylization},
  author={Men, Yifang and Yao, Yuan and Cui, Miaomiao and Lian, Zhouhui and Xie, Xuansong},
  journal={ACM Transactions on Graphics (TOG)},
  volume={41},
  number={4},
  pages={1--9},
  year={2022},
  publisher={ACM New York, NY, USA}
}