This repository contains an op-for-op PyTorch reimplementation of Generative Adversarial Networks.
- Google Drive
- Baidu Drive (access code: `llot`)
Modify the contents of the file as follows.

- `config.py` line 35: change `mode="train"` to `mode="valid"`
- `config.py` line 79: change `model_path=f"results/{exp_name}/g-last.pth"` to `model_path=f"<YOUR-WEIGHTS-PATH>.pth"`
- Run `python validate.py`.
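Putting those two edits together, the relevant part of `config.py` might look like this after editing. This is an illustrative sketch only; the surrounding lines and exact layout of the real file may differ, and `<YOUR-WEIGHTS-PATH>` is a placeholder you must fill in yourself.

```python
# config.py (excerpt) -- validation settings, illustrative layout
mode = "valid"  # line 35: switched from "train" to "valid"

# line 79: point at the generator weights you want to evaluate
model_path = "<YOUR-WEIGHTS-PATH>.pth"
```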
Modify the contents of the file as follows.

- `config.py` line 35: change `mode="valid"` to `mode="train"`
- Run `python train.py`.
If you want to load weights that you've trained before, modify the contents of the file as follows.

- `config.py` line 35: change `mode="valid"` to `mode="train"`
- `config.py` line 51: change `start_epoch=0` to `start_epoch=XXX`
- `config.py` line 52: change `resume=False` to `resume=True`
- `config.py` line 53: change `resume_d_weight=""` to `resume_d_weight=<YOUR-RESUME-D-WEIGHTS-PATH>`
- `config.py` line 54: change `resume_g_weight=""` to `resume_g_weight=<YOUR-RESUME-G-WEIGHTS-PATH>`
- Run `python train.py`.
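The resume settings above typically interact as follows: training continues from `start_epoch` only when `resume=True` and both weight paths are set. A minimal sketch of that decision logic (the function name and exact behavior are illustrative assumptions, not code taken from this repository's `train.py`):

```python
def resolve_resume(config):
    """Decide whether training starts fresh or from saved checkpoints.

    `config` is a dict mirroring the config.py fields above; the
    repository's actual resume logic may differ.
    """
    if config.get("resume") and config["resume_d_weight"] and config["resume_g_weight"]:
        # Resume: honor start_epoch and load both checkpoint paths.
        return {
            "start_epoch": config["start_epoch"],
            "d_weights": config["resume_d_weight"],
            "g_weights": config["resume_g_weight"],
        }
    # Fresh run: start from epoch 0 with no checkpoints.
    return {"start_epoch": 0, "d_weights": None, "g_weights": None}


# Example: resuming from epoch 30 with previously saved weights.
cfg = {
    "resume": True,
    "start_epoch": 30,
    "resume_d_weight": "results/exp/d-epoch30.pth",
    "resume_g_weight": "results/exp/g-epoch30.pth",
}
print(resolve_resume(cfg)["start_epoch"])  # prints 30
```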
If you find a bug, create a GitHub issue, or even better, submit a pull request. Similarly, if you have questions, simply post them as GitHub issues.
I look forward to seeing what the community does with these models!
Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, Yoshua Bengio
Abstract
We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train
two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the
probability that a sample came from the training data rather than G. The training procedure for G is to maximize the
probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary
functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to 1/2
everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with
backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either
training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and
quantitative evaluation of the generated samples.
[Paper] [Authors' Implementation]
@article{adversarial,
  title={Generative Adversarial Networks},
  author={Goodfellow, Ian J. and Pouget-Abadie, Jean and Mirza, Mehdi and Xu, Bing and Warde-Farley, David and Ozair, Sherjil and Courville, Aaron and Bengio, Yoshua},
  journal={Advances in Neural Information Processing Systems},
  year={2014}
}