Deep Preset: Blending and Retouching Photos with Color Style Transfer (WACV'2021)
[Page] [Paper] [SupDoc] [SupVid] [5-min Presentation] [Slides]
Man M. Ho, Jinjia Zhou
Prerequisites
- Ubuntu 16.04
- Pillow
- PyTorch >= 1.1.0
- Numpy
- gdown (for fetching pretrained models)
Get Started
1. Clone this repo
git clone https://github.com/minhmanho/deep_preset.git
cd deep_preset
2. Fetch our trained model
As described in our paper, the Positive Pair-wise Loss (PPL) improves Deep Preset at directly stylizing photos but degrades its preset prediction. Depending on your needs, download Deep Preset with PPL for direct photo stylization:
./models/fetch_model_wPPL.sh
Or Deep Preset without PPL for preset prediction:
./models/fetch_model_woPPL.sh
Blending and Retouching Photos
Run our Deep Preset to stylize photos as:
CUDA_VISIBLE_DEVICES=0 python run.py \
--content ./data/content/ \
--style ./data/style/ \
--out ./data/out/ \
--ckpt ./models/dp_wPPL.pth.tar \
--size 512x512
Here, --size sets the photo size as [Width]x[Height]; both dimensions should be divisible by 16. Note that setting --size to 352x352 also activates preset prediction.
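Since both dimensions passed to --size must be divisible by 16, an input photo may need to be snapped to valid dimensions first. A minimal sketch (the helper name `snap_to_16` is ours, not part of the repo) that rounds each dimension down to the nearest multiple of 16:

```python
def snap_to_16(width, height):
    """Round each dimension down to the nearest multiple of 16 (minimum 16)."""
    snap = lambda v: max(16, v - v % 16)
    return snap(width), snap(height)

# e.g. a 500x375 photo maps to a valid --size of 496x368
w, h = snap_to_16(500, 375)
print(f"--size {w}x{h}")  # --size 496x368
```

The resulting string can be passed directly as the --size argument of run.py.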
If only the preset prediction is needed, please add --p as:
CUDA_VISIBLE_DEVICES=0 python run.py \
--content ./data/content/ \
--style ./data/style/ \
--out ./data/out/ \
--ckpt ./models/dp_woPPL.pth.tar \
--p
After processing, the predicted preset will be stored as a JSON file revealing how Lightroom settings are adjusted, as follows:
{
"Highlights2012": -23,
"Shadows2012": 4,
"BlueHue": -8,
"Sharpness": 19,
"Clarity2012": -2
...
}
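Since the output is plain JSON, it is easy to inspect programmatically, e.g. to review the strongest adjustments before applying them in Lightroom. A minimal sketch (the inline JSON string stands in for a file produced under --out; the exact filename depends on the content/style pair):

```python
import json

# Example predicted preset, abbreviated from the output above.
preset_json = '{"Highlights2012": -23, "Shadows2012": 4, "BlueHue": -8}'
preset = json.loads(preset_json)

# List settings by magnitude of adjustment, largest first.
for name, value in sorted(preset.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name:>14}: {value:+d}")
```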
Cosplay Portraits
Photos were taken by Do Khang (the top-left photo) and the first author (the others).
Citation
If you find this work useful, please consider citing:
@InProceedings{Ho_2021_WACV,
author = {Ho, Man M. and Zhou, Jinjia},
title = {Deep Preset: Blending and Retouching Photos With Color Style Transfer},
booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
month = {January},
year = {2021},
pages = {2113-2121}
}
Acknowledgements
We would like to thank:
- Do Khang for the photos of Duyen To.
- digantamisra98 for the unofficial PyTorch implementation of EvoNorm: Liu, Hanxiao, Andrew Brock, Karen Simonyan, and Quoc V. Le. "Evolving Normalization-Activation Layers." arXiv preprint arXiv:2004.02967 (2020).
- Richard Zhang for the BlurPool: Zhang, Richard. "Making Convolutional Networks Shift-Invariant Again." ICML (2019).
License
Our code and trained models are for non-commercial uses and research purposes only.
Contact
If you have any questions, feel free to contact me (the maintainer) at manminhho.cs@gmail.com.