Code for the paper "Image-to-image translation for cross-domain disentanglement", NeurIPS 2018.
Based on the pix2pix implementation by Christopher Hesse, extensively explained in his accompanying article.
Please follow the setup described here. Tested with TensorFlow 1.8.0.
See DATA/MNISTCDCB/ for example images of our MNIST-CD/CB dataset.
To train a model MODEL using dataset DATA, run
python run_cross_domain_disen.py \
--mode train \
--output_dir checkpoints/MODEL \
--input_dir DATA/train/
Once the model has finished training, it can be tested by running
python run_cross_domain_disen.py \
--mode test \
--output_dir test/MODEL \
--checkpoint checkpoints/MODEL \
--input_dir DATA/test/
To extract disentangled features for other tasks (e.g., cross-domain retrieval), run the following; a short retrieval sketch is given after the command.
python run_cross_domain_disen.py \
--mode features \
--output_dir features/MODEL \
--checkpoint checkpoints/MODEL \
--input_dir DATA/test/
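The exact output format of the features mode is not documented here. As a minimal sketch, assuming the shared features of the two domains can be loaded as NumPy arrays of shape (num_images, feature_dim), and noting that the file names shared_X.npy and shared_Y.npy below are hypothetical placeholders for whatever run_cross_domain_disen.py writes to features/MODEL, cross-domain retrieval by cosine similarity could look like this:

# Hypothetical retrieval example; replace the file names with the actual
# outputs written to features/MODEL by the features mode.
import numpy as np

# Shared-part features for domain X (queries) and domain Y (database),
# shape (num_images, feature_dim).
queries = np.load("features/MODEL/shared_X.npy")
database = np.load("features/MODEL/shared_Y.npy")

# L2-normalize so the dot product equals cosine similarity.
queries = queries / np.linalg.norm(queries, axis=1, keepdims=True)
database = database / np.linalg.norm(database, axis=1, keepdims=True)

# For every query image from domain X, rank all domain-Y images by similarity.
similarity = queries @ database.T                  # (num_X, num_Y)
ranking = np.argsort(-similarity, axis=1)          # most similar first

# Top-1 retrieval accuracy, assuming image i in X corresponds to image i in Y.
top1 = (ranking[:, 0] == np.arange(len(queries))).mean()
print("top-1 cross-domain retrieval accuracy: %.3f" % top1)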
Please cite the following paper if you use this code:
@inproceedings{gonzalez-garcia2018NeurIPS,
title={Image-to-image translation for cross-domain disentanglement},
author={Gonzalez-Garcia, Abel and van de Weijer, Joost and Bengio, Yoshua},
booktitle={NeurIPS},
year={2018}
}