DCT-NET.Pytorch

unofficial PyTorch implementation of DCT-Net: Domain-Calibrated Translation for Portrait Stylization (MIT license).
you can find the official version here

show

sample image and video results

environment

you can build your environment by following this
pip install tensorboardX (only needed for training visualization)
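Since tensorboardX is only used for logging, a guarded import lets the rest of the code run without it. This is a sketch of that pattern, not code from this repo:

```python
# optional dependency: tensorboardX is only needed for training-curve logging
try:
    from tensorboardX import SummaryWriter
except ImportError:
    SummaryWriter = None  # visualization disabled; training itself still works

def make_writer(log_dir="runs"):
    # return a SummaryWriter when tensorboardX is installed, else None
    return SummaryWriter(log_dir) if SummaryWriter is not None else None
```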

how to run

train

download the pretrained weights

cd utils
bash download_weight.sh

follow rosinality/stylegan2-pytorch and put 550000.pt in pretrain_models
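Before launching training, it may help to confirm the StyleGAN2 checkpoint landed where the trainer expects it. A minimal sketch (`find_checkpoint` is a hypothetical helper, not part of this repo):

```python
from pathlib import Path

def find_checkpoint(root="pretrain_models", name="550000.pt"):
    """Return the path of the pretrained StyleGAN2 checkpoint, or None if missing."""
    path = Path(root) / name
    return path if path.is_file() else None

if find_checkpoint() is None:
    print("550000.pt not found -- rerun utils/download_weight.sh and the stylegan2 setup")
```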

CCN

  1. prepare the style pictures and align them
    the image layout looks like this
    style-photos/
    |-- 000000.png
    |-- 000006.png
    |-- 000010.png
    |-- 000011.png
    |-- 000015.png
    |-- 000028.png
    |-- 000039.png

  2. set your own paths in ccn_config

  3. train ccn

    # single gpu
    python train.py \
    --model ccn \
    --batch_size 16 \
    --checkpoint_path checkpoint \
    --lr 0.002 \
    --print_interval 100 \
    --save_interval 100
    # multi gpu
    python -m torch.distributed.launch --nproc_per_node=<num_gpus> train.py \
    --model ccn \
    --batch_size 16 \
    --checkpoint_path checkpoint \
    --lr 0.002 \
    --print_interval 100 \
    --save_interval 100 \
    --dist

after about 1000 steps you can stop training
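The training flags used above can be parsed with a small argparse sketch. This is an assumption about train.py's interface, reconstructed from the commands shown; the repo's real parser may differ:

```python
import argparse

def build_parser():
    # mirrors the flags used in the ccn/ttn commands above (an assumption)
    p = argparse.ArgumentParser(description="DCT-Net trainer")
    p.add_argument("--model", choices=["ccn", "ttn"], required=True)
    p.add_argument("--batch_size", type=int, default=16)
    p.add_argument("--checkpoint_path", default="checkpoint")
    p.add_argument("--lr", type=float, default=2e-3)
    p.add_argument("--print_interval", type=int, default=100)
    p.add_argument("--save_interval", type=int, default=100)
    p.add_argument("--dist", action="store_true", help="enable distributed training")
    return p

args = build_parser().parse_args(["--model", "ccn", "--lr", "0.002"])
print(args.model, args.lr, args.dist)  # -> ccn 0.002 False
```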

TTN

  1. prepare expression information
    you can follow LVT to estimate facial landmarks
    cd utils
    python get_face_expression.py \
    --img_base '' \
    --pool_num 2 \
    --LVT '' \
    --train
    # --img_base: your real image base path, e.g. ffhq
    # --pool_num: number of worker processes
    # --LVT: the path where you put LVT
    # --train: process training data (omit it for validation data)
  2. prepare your generated images
    cd utils
    python get_tcc_input.py \
    --model_path '' \
    --output_path ''
    # --model_path: trained ccn model path
    # --output_path: where to save the generated images
    then manually select about 5k-10k good images
  3. set your own paths in ttn_config
    # for example
    self.train_src_root = '/StyleTransform/DATA/ffhq-2w/img'
    self.train_tgt_root = '/StyleTransform/DATA/select-style-gan'
    self.val_src_root = '/StyleTransform/DATA/dmloghq-1k/img'
    self.val_tgt_root = '/StyleTransform/DATA/select-style-gan'
  4. train ttn
    # single- and multi-gpu usage is the same as for ccn
    python train.py \
    --model ttn \
    --batch_size 64 \
    --checkpoint_path checkpoint \
    --lr 2e-4 \
    --print_interval 100 \
    --save_interval 100 \
    --dist
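The ttn_config paths in step 3 can be sketched as a small config class. This shape is an assumption based on the attributes shown above, not the repo's actual config file:

```python
class TTNConfig:
    """Dataset roots used by the TTN trainer; replace the paths with your own."""
    def __init__(self):
        self.train_src_root = '/StyleTransform/DATA/ffhq-2w/img'       # real faces
        self.train_tgt_root = '/StyleTransform/DATA/select-style-gan'  # ccn-generated style images
        self.val_src_root = '/StyleTransform/DATA/dmloghq-1k/img'
        self.val_tgt_root = '/StyleTransform/DATA/select-style-gan'
```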

inference

edit inference.py to set your own ttn model path and image path, then run
python inference.py

Credits

SEAN model and implementation:
https://github.com/ZPdesu/SEAN Copyright © 2020, ZPdesu.
License https://github.com/ZPdesu/SEAN/blob/master/LICENSE.md

stylegan2-pytorch model and implementation:
https://github.com/rosinality/stylegan2-pytorch Copyright © 2019, rosinality.
License https://github.com/rosinality/stylegan2-pytorch/blob/master/LICENSE

White-box-Cartoonization model and implementation:
https://github.com/SystemErrorWang/White-box-Cartoonization Copyright © 2020, SystemErrorWang.

White-box-Cartoonization PyTorch model and implementation:
https://github.com/vinesmsuic/White-box-Cartoonization-PyTorch Copyright © 2022, vinesmsuic.
License https://github.com/vinesmsuic/White-box-Cartoonization-PyTorch/blob/main/LICENSE

arcface-pytorch model and implementation:
https://github.com/ronghuaiyang/arcface-pytorch Copyright © 2018, ronghuaiyang.