Our codebase builds on a pre-existing one, given below. The codebase supports CIFAR10 and MNIST, but can easily be modified for any standard torchvision dataset that is downloadable. For intrinsically grayscale datasets (e.g. FashionMNIST), please refer to the MNIST preprocessing transform in any of the scripts. Code is currently provided for:
Mitigation methods
- Reweighting of temporal frequency
- MSE loss in the frequency domain with reweighting
- MSE loss in the frequency domain with n-step sampling + reweighting
- MSE loss in the frequency domain with n-step sampling (direct formula) + reweighting
Reweighting involves sampling during training from a Bernoulli distribution that assigns equal probability to
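A minimal sketch of how such Bernoulli-based reweighting could look (the function name, signature, and two-weighting setup are assumptions for illustration; the actual scheme lives in diffusion.py):

```python
import torch

def bernoulli_reweighted_mse(pred, target, weight_a, weight_b, p=0.5):
    """MSE where each batch element's per-frequency weight is chosen by a
    Bernoulli draw. Hypothetical names; see diffusion.py for the real scheme."""
    # One draw per batch element; p=0.5 gives both weightings equal probability.
    pick = torch.bernoulli(torch.full((pred.shape[0],), p)).bool()
    w = torch.where(pick.view(-1, 1, 1, 1), weight_a, weight_b)
    return ((pred - target) ** 2 * w).mean()
```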
Our implementation of the spectrum loss borrows directly from https://github.com/autonomousvision/frequency_bias.
All mitigation code is contained in diffusion.py; please refer to it for more details. For a fuller theoretical analysis, please refer to the paper.
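As a rough illustration of the frequency-domain MSE losses listed above (a minimal sketch, not the repo's exact implementation, which follows the frequency_bias code): the prediction and target are transformed with a 2-D DFT, and the squared error is taken on the complex spectra, optionally multiplied by a per-frequency weight.

```python
import torch

def frequency_mse(pred, target, weight=None):
    """Sketch of an MSE computed in the frequency domain. With an
    orthonormal FFT and no weighting this equals the spatial MSE
    (Parseval); the weight tensor is where reweighting would enter."""
    fp = torch.fft.fft2(pred, norm="ortho")
    ft = torch.fft.fft2(target, norm="ortho")
    err = (fp - ft).abs() ** 2        # squared error per frequency bin
    if weight is not None:
        err = err * weight            # hypothetical per-frequency weights
    return err.mean()
```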
Unofficial PyTorch implementation of Denoising Diffusion Probabilistic Models [1].
This implementation follows most of the details in the official TensorFlow implementation [2]. I ported [2] to PyTorch using a PyTorch coding style, and hope that anyone familiar with PyTorch can easily understand every implementation detail.
- Datasets
  - Support CIFAR10
  - Support LSUN
  - Support CelebA-HQ
- Features
  - Gradient accumulation
  - Multi-GPU training
- Reproducing Experiment
  - CIFAR10
- Python 3.6
- Packages (upgrade pip for installing the latest tensorboard):

  ```
  pip install -U pip setuptools
  pip install -r requirements.txt
  ```
- Download the precalculated statistics for the dataset and create a folder `stats` containing `cifar10.train.npz`:

  ```
  stats
  └── cifar10.train.npz
  ```
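The statistics file is an ordinary NumPy archive (typically the precomputed activation mean and covariance used for FID; the key names are not documented here, so inspect the file to confirm). A small helper to load it might look like:

```python
import numpy as np

def load_fid_stats(path="./stats/cifar10.train.npz"):
    """Load precalculated dataset statistics from an .npz archive.
    Key names (e.g. 'mu'/'sigma') are assumptions; check stats.files."""
    stats = np.load(path)
    return {k: stats[k] for k in stats.files}
```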
- Take CIFAR10 for example:
  ```
  python main.py --train \
      --flagfile ./config/CIFAR10.txt
  ```
- [Optional] Overwrite arguments
  ```
  python main.py --train \
      --flagfile ./config/CIFAR10.txt \
      --batch_size 64 \
      --logdir ./path/to/logdir
  ```
- [Optional] Select GPU IDs
  ```
  CUDA_VISIBLE_DEVICES=1 python main.py --train \
      --flagfile ./config/CIFAR10.txt
  ```
- [Optional] Multi-GPU training
  ```
  CUDA_VISIBLE_DEVICES=0,1,2,3 python main.py --train \
      --flagfile ./config/CIFAR10.txt \
      --parallel
  ```
- A `flagfile.txt` is autosaved to your log directory. The default logdir for `config/CIFAR10.txt` is `./logs/DDPM_CIFAR10_EPS`.
- Start evaluation
  ```
  python main.py \
      --flagfile ./logs/DDPM_CIFAR10_EPS/flagfile.txt \
      --notrain \
      --eval
  ```
- [Optional] Multi-GPU evaluation
  ```
  CUDA_VISIBLE_DEVICES=0,1,2,3 python main.py \
      --flagfile ./logs/DDPM_CIFAR10_EPS/flagfile.txt \
      --notrain \
      --eval \
      --parallel
  ```
The checkpoint can be downloaded from my drive.