CLIP-Guided-Diffusion

This is a collection of resources related to generating art with CLIP Guided Diffusion and related technologies.
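The core idea behind CLIP-guided diffusion is to steer each denoising step with the gradient of the CLIP similarity between the current image and a text prompt. The sketch below is a toy illustration of that guidance step only: the linear "encoder" `W` and all dimensions are hypothetical stand-ins for the real CLIP image encoder, and the step size is arbitrary.

```python
import numpy as np

# Toy sketch of CLIP-style guidance: nudge the current image estimate in the
# direction that raises cosine similarity between its embedding and the text
# embedding. W is a stand-in linear "image encoder", NOT the real CLIP model.

rng = np.random.default_rng(0)
D_IMG, D_EMB = 16, 8                    # toy image / embedding sizes
W = rng.normal(size=(D_EMB, D_IMG))    # hypothetical linear image encoder
text_emb = rng.normal(size=D_EMB)      # stand-in for the prompt's embedding
text_emb /= np.linalg.norm(text_emb)

def cosine_sim(x):
    z = W @ x
    return float(z @ text_emb / np.linalg.norm(z))

def guidance_grad(x):
    # Analytic gradient of cosine similarity for the linear toy encoder:
    # d/dz (z.t / |z|) = (t - (u.t) u) / |z|, with u = z / |z|.
    z = W @ x
    n = np.linalg.norm(z)
    u = z / n
    dz = (text_emb - (u @ text_emb) * u) / n
    return W.T @ dz                     # chain rule back to image space

x = rng.normal(size=D_IMG)              # "noisy image" at some diffusion step
before = cosine_sim(x)
x = x + 0.05 * guidance_grad(x)         # one small guidance nudge
after = cosine_sim(x)                   # similarity should increase
```

In the real notebooks this gradient comes from backpropagating through CLIP's image encoder (via autograd), and the nudge is folded into each step of the diffusion sampler rather than applied as a standalone update.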

Colab notebooks that use CLIP Guided Diffusion:

Note: these are based on Katherine Crowson's original notebook, which circulated on Twitter and branched into many variants.

Disco Diffusion major contributors include: Somnai, Gandamu, and others [todo, fill this in]

JAX v2.7 contributors include: Nshepperd, Huemin

Academic Papers:

Denoising Diffusion Probabilistic Models [June 2020] by Jonathan Ho, Ajay Jain, Pieter Abbeel

Learning Transferable Visual Models From Natural Language Supervision [February 2021] by Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever (OpenAI)

Diffusion Models Beat GANs on Image Synthesis [May 2021] by Prafulla Dhariwal, Alex Nichol

Hierarchical Text-Conditional Image Generation with CLIP Latents [April 2022] by Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, Mark Chen

  • 'DALL-E 2' paper from OpenAI

GLIDE: Towards Photorealistic Image Generation and Editing with Text-Guided Diffusion Models [March 2022] by Alex Nichol, Prafulla Dhariwal, Aditya Ramesh, Pranav Shyam, Pamela Mishkin, Bob McGrew, Ilya Sutskever, Mark Chen

High-Resolution Image Synthesis with Latent Diffusion Models [December 2021] by Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, Björn Ommer

  • 'Latent Diffusion' paper
  • Used by Stable Diffusion
  • Also useful for understanding Centipede Diffusion