One2one-unpaired

One to one mapping for unpaired image-to-image translation





One-to-one Mapping for Unpaired Image-to-image Translation in PyTorch

We provide a PyTorch implementation of One-to-one Mapping for Unpaired Image-to-image Translation.

The code was written by Zengming Shen. This PyTorch implementation produces results comparable to or better than the CycleGAN baseline. If you would like to reproduce the results reported in the papers and compare them with the baseline, check out the baseline CycleGAN Torch code.

Note: The current software works well with PyTorch 0.4+. Check out the older branch that supports PyTorch 0.1-0.3.

You may find useful information in training/test tips and frequently asked questions. To implement custom models and datasets, check out our templates. To help users better understand and adapt our codebase, we provide an overview of the code structure of this repository.

One2one unpaired: Conference presentation | Paper | Torch

Course

The CycleGAN course assignment code and handout were designed by Prof. Roger Grosse for CSC321 "Intro to Neural Networks and Machine Learning" at the University of Toronto. Please contact the instructor if you would like to adopt it in your course.

Prerequisites

  • Linux or macOS
  • Python 3
  • CPU or NVIDIA GPU + CUDA CuDNN

Getting Started

Installation

  • Clone this repo:
git clone https://github.com/ALISCIFP/One2one-unpaired
cd One2one-unpaired
  • Install [PyTorch](http://pytorch.org) 0.4+ and other dependencies (e.g., torchvision, visdom, and dominate).
    • For pip users, please type the command pip install -r requirements.txt.
    • For Conda users, we provide an installation script ./scripts/conda_deps.sh. Alternatively, you can create a new Conda environment using conda env create -f environment.yml.
    • For Docker users, we provide the pre-built Docker image and Dockerfile. Please refer to our Docker page.
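
After installation, an optional sanity check (not part of the original instructions) confirms that PyTorch imports correctly and that CUDA is visible:
# prints the installed PyTorch version and whether a CUDA device is available
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"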

CycleGAN train/test

  • Download a CycleGAN dataset (e.g. maps):
bash ./datasets/download_cyclegan_dataset.sh maps
  • To view training results and loss plots, run python -m visdom.server and click the URL http://localhost:8097.
  • Train a model:
#!./scripts/train.sh
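This script wraps the standard training entry point. In the pytorch-CycleGAN-and-pix2pix codebase this repository builds on, the underlying command typically looks like the following (the exact flags are an assumption; see ./scripts/train.sh for the options actually used):
# assumed invocation; --name must match the checkpoint directory under ./checkpoints/
python train.py --dataroot ./datasets/maps --name maps_cyclegan --model cycle_gan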

To see more intermediate results, check out ./checkpoints/maps_cyclegan/web/index.html.

  • Test the model:
#!./scripts/test.sh
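As with training, the test script typically wraps a command along these lines (again an assumption based on the upstream codebase; see ./scripts/test.sh for the exact flags):
# assumed invocation; results are written under ./results/<name>/
python test.py --dataroot ./datasets/maps --name maps_cyclegan --model cycle_gan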
  • The test results will be saved to an HTML file here: ./results/maps_cyclegan/latest_test/index.html.

We provide the pre-built Docker image and Dockerfile that can run this code repo. See docker.

Download pix2pix/CycleGAN datasets and create your own datasets.

Best practice for training and testing your models.

Before you post a new question, please first look at the above Q & A and existing GitHub issues.

Custom Model and Dataset

If you plan to implement custom models and datasets for your new applications, we provide a dataset template and a model template as a starting point.
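
As a rough sketch of that workflow (the file names below are hypothetical and assume the templates live under data/ and models/ as in the upstream codebase; check the repository for the actual paths and flags):
# hypothetical example: derive a new dataset module from the template
cp data/template_dataset.py data/mydata_dataset.py
# select it at training time by name (assumes the upstream-style --dataset_mode flag)
python train.py --dataroot ./datasets/mydata --name mydata_experiment --dataset_mode mydata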

To help users better understand and use our code, we briefly overview the functionality and implementation of each package and each module.

Pull Request

You are always welcome to contribute to this repository by sending a pull request. Please run flake8 --ignore E501 . and python ./scripts/test_before_push.py before you commit the code. Please also update the code structure overview accordingly if you add or remove files.
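
For reference, both checks can be run from the repository root:
flake8 --ignore E501 .                  # lint check, ignoring line-length (E501) warnings
python ./scripts/test_before_push.py    # repository smoke test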

Citation

If you use this code for your research, please cite our papers.

@inproceedings{shen2020one,
  title={One-to-one Mapping for Unpaired Image-to-image Translation},
  author={Shen, Zengming and Zhou, S Kevin and Chen, Yifan and Georgescu, Bogdan and Liu, Xuqi and Huang, Thomas},
  booktitle={The IEEE Winter Conference on Applications of Computer Vision},
  pages={1170--1179},
  year={2020}
}
@inproceedings{shen2020learning,
  title={Learning a Self-Inverse Network for Bidirectional MRI Image Synthesis},
  author={Shen, Zengming and Chen, Yifan and Zhou, S Kevin and Georgescu, Bogdan and Liu, Xuqi and Huang, Thomas S},
  booktitle={2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI)},
  pages={1765--1769},
  year={2020},
  organization={IEEE}
}


Related Projects

CycleGAN-Torch | pix2pix-Torch | pix2pixHD | iGAN | BicycleGAN | vid2vid

Cat Paper Collection

If you love cats, and love reading cool graphics, vision, and learning papers, please check out the Cat Paper Collection.

Acknowledgments

Our code is inspired by pytorch-CycleGAN-and-pix2pix.