This is a simple implementation based on the repository (https://github.com/dome272/Diffusion-Models-pytorch). The diffusion model follows the DDPM paper, and the conditioning idea is taken from Classifier-Free Diffusion Guidance.
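For reference, classifier-free guidance steers sampling by interpolating between the unconditional and conditional noise predictions. A minimal sketch of that combination step (the function name and scale convention here are illustrative, not taken from this repository):

```python
import numpy as np

def cfg_combine(eps_uncond, eps_cond, guidance_scale):
    """Classifier-free guidance: interpolate/extrapolate between the
    unconditional and conditional noise predictions.

    guidance_scale = 0 -> purely unconditional prediction
    guidance_scale = 1 -> purely conditional prediction
    guidance_scale > 1 -> extrapolates past the conditional prediction,
                          strengthening the conditioning signal.
    """
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

# Toy example with dummy noise predictions:
uncond = np.zeros(4)
cond = np.ones(4)
guided = cfg_combine(uncond, cond, guidance_scale=3.0)  # extrapolated toward cond
```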
- Configure hyperparameters in `main.py`.
- Set dataset usage in `utils.py`.

To train, run:

```
python main.py
```
The `generate.py` file shows how to sample images using the model's saved checkpoints in `models/DDPM_conditional`:

```
python generate.py
```
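At sampling time, DDPM runs the reverse diffusion chain: starting from pure Gaussian noise, it repeatedly denoises using the model's noise prediction. A minimal numpy sketch of that reverse loop under a linear beta schedule (the function names and default schedule values are illustrative; the repository's actual sampler lives in its PyTorch code):

```python
import numpy as np

def ddpm_sample(predict_noise, shape, T=1000, beta_start=1e-4, beta_end=0.02, seed=0):
    """Reverse DDPM sampling loop (ancestral sampling).

    predict_noise(x, t) stands in for the trained network's noise
    prediction at timestep t.
    """
    rng = np.random.default_rng(seed)
    betas = np.linspace(beta_start, beta_end, T)   # linear noise schedule
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)

    x = rng.standard_normal(shape)                 # start from pure noise x_T
    for t in range(T - 1, -1, -1):
        eps = predict_noise(x, t)
        # Posterior mean: (x_t - (1-alpha_t)/sqrt(1-alpha_bar_t) * eps) / sqrt(alpha_t)
        coef = (1.0 - alphas[t]) / np.sqrt(1.0 - alpha_bars[t])
        mean = (x - coef * eps) / np.sqrt(alphas[t])
        # Add noise at every step except the last (t = 0)
        noise = rng.standard_normal(shape) if t > 0 else 0.0
        x = mean + np.sqrt(betas[t]) * noise
    return x

# Usage with a dummy predictor (a real run would use the trained U-Net):
sample = ddpm_sample(lambda x, t: np.zeros_like(x), shape=(2, 3), T=50)
```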
I trained the model on a CPU for only 6 epochs and got the following results for 2 generated samples:
Compared to the target image:
The results can definitely be improved with longer training and hyperparameter tuning.
To quantitatively evaluate the generated results, metrics such as FID and CLIP score could be used. Due to time constraints, these metrics are not implemented yet.
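As a starting point, FID compares the mean and covariance of feature activations (typically Inception features) for real and generated images via the Fréchet distance. A minimal sketch over precomputed feature arrays, assuming `scipy` is available for the matrix square root (feature extraction itself is out of scope here):

```python
import numpy as np
from scipy import linalg

def fid(feats_real, feats_gen):
    """Frechet Inception Distance between two feature sets of shape (n, dim).

    FID = ||mu1 - mu2||^2 + Tr(C1 + C2 - 2 * sqrt(C1 @ C2))
    """
    mu1, mu2 = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    c1 = np.cov(feats_real, rowvar=False)
    c2 = np.cov(feats_gen, rowvar=False)

    covmean = linalg.sqrtm(c1 @ c2)
    if np.iscomplexobj(covmean):
        covmean = covmean.real   # discard tiny imaginary parts from sqrtm

    diff = mu1 - mu2
    return float(diff @ diff + np.trace(c1 + c2 - 2.0 * covmean))

# Usage with random stand-in features (real use: Inception activations):
rng = np.random.default_rng(0)
real = rng.standard_normal((200, 8))
gen = real + 1.0   # shifted distribution -> nonzero FID
```

Identical feature sets give an FID of (numerically) zero, which is a quick sanity check for any implementation.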