Code for the paper "TransGAN: Two Pure Transformers Can Make One Strong GAN, and That Can Scale Up".
- Gradient checkpointing via `torch.utils.checkpoint`
- 16-bit (mixed) precision training
- Distributed training (faster!)
- IS/FID evaluation
- Gradient accumulation

Minimal sketches of how several of these features fit together are shown below.
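Gradient checkpointing, mixed precision, and gradient accumulation combine in a single training step roughly as follows. This is a minimal sketch, not the repo's actual training code: `TinyTransformer`, the hyperparameters, and the random input are hypothetical stand-ins, and `use_reentrant=False` assumes a reasonably recent PyTorch.

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

# Hypothetical stand-in for the transformer generator/discriminator.
class TinyTransformer(nn.Module):
    def __init__(self, dim=64, depth=4):
        super().__init__()
        self.blocks = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
            for _ in range(depth)
        )
        self.head = nn.Linear(dim, 1)

    def forward(self, x):
        for block in self.blocks:
            # Gradient checkpointing: drop intermediate activations in the
            # forward pass and recompute them in backward to save memory.
            x = checkpoint(block, x, use_reentrant=False)
        return self.head(x.mean(dim=1))

model = TinyTransformer().cuda()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
scaler = torch.cuda.amp.GradScaler()   # loss scaling for 16-bit training
accum_steps = 4                        # gradient accumulation factor

for step in range(100):
    x = torch.randn(8, 16, 64, device="cuda")  # (batch, tokens, dim)
    with torch.cuda.amp.autocast():    # run the forward pass in fp16
        loss = model(x).mean() / accum_steps
    scaler.scale(loss).backward()      # gradients accumulate across steps
    if (step + 1) % accum_steps == 0:  # optimizer step every accum_steps
        scaler.step(optimizer)
        scaler.update()
        optimizer.zero_grad()
```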
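For distributed training, the generic PyTorch pattern is to wrap the model in `DistributedDataParallel`. The sketch below shows that standard setup only; it is not necessarily how this repo launches jobs.

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# Generic multi-GPU setup; launch with e.g.:
#   torchrun --nproc_per_node=NUM_GPUS train.py
# torchrun sets LOCAL_RANK for each spawned process.
dist.init_process_group(backend="nccl")
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)

model = torch.nn.Linear(64, 1).cuda()        # any nn.Module works here
model = DDP(model, device_ids=[local_rank])
# backward() now all-reduces gradients across processes automatically,
# so every GPU applies the same averaged gradient at optimizer.step().
```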
README is still being updated.
Codebase adapted from AutoGAN and pytorch-image-models.
If you find this repo helpful, please cite:
```bibtex
@article{jiang2021transgan,
  title={TransGAN: Two Transformers Can Make One Strong GAN},
  author={Jiang, Yifan and Chang, Shiyu and Wang, Zhangyang},
  journal={arXiv preprint arXiv:2102.07074},
  year={2021}
}
```