GANs101_ODSC_GenerativeAI_2023

Generative Adversarial Networks 101 Tutorial for ODSC Generative AI Summit 2023


Generative Adversarial Networks 101

Generative models are at the heart of DeepFakes, and can be used to synthesize, replace, or swap attributes of images. Learn the basics of Generative Adversarial Networks, the famous GANs, from the ground up: autoencoders, latent spaces, generators, discriminators, GANs, DCGANs, and WGANs.

The main goal of this session is to show you how GANs work. We will start with a simple example using synthetic data (not generated by GANs) to learn about latent spaces, and then use GANs to sample those latent spaces and generate more synthetic data. We will then improve on the model's architecture by incorporating convolutional layers (DCGAN) and different loss functions (WGAN, WGAN-GP).

Module 1: Latent spaces and autoencoders Learn how autoencoders use latent spaces to represent data.
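The core idea of Module 1 can be sketched in a few lines of PyTorch: an encoder compresses each input into a small latent vector, and a decoder reconstructs the input from that vector. The layer sizes and the 2-dimensional latent space below are illustrative assumptions, not the notebook's exact architecture.

```python
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    """Compress inputs into a small latent vector, then reconstruct them."""
    def __init__(self, input_dim=784, latent_dim=2):  # sizes are illustrative
        super().__init__()
        # Encoder: maps the input down to the latent space
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )
        # Decoder: maps a latent vector back to input space
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, input_dim), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)           # latent representation of x
        return self.decoder(z), z     # reconstruction and latent code

model = AutoEncoder()
x = torch.rand(16, 784)               # a fake batch of flattened 28x28 images
recon, z = model(x)
print(recon.shape, z.shape)           # torch.Size([16, 784]) torch.Size([16, 2])
```

Training the autoencoder to minimize the reconstruction error (e.g. MSE between `x` and `recon`) forces the latent space to capture the data's structure, which is what makes it useful for generation later.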

Module 2: Your first GAN Learn how decoders can be used as Generators, generating images from sampling latent spaces, and how to combine them with Discriminators to build your first GAN.
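One adversarial training step from Module 2 can be sketched as follows: the discriminator learns to score real samples as 1 and generated samples as 0, while the generator learns to fool it. The layer sizes, latent dimension, and learning rates are assumptions for illustration; the notebook's models will differ.

```python
import torch
import torch.nn as nn

latent_dim = 16
# Generator: latent vector -> fake sample; Discriminator: sample -> realness logit
G = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 64), nn.LeakyReLU(0.2), nn.Linear(64, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.rand(32, 784) * 2 - 1    # stand-in for a batch of real images

# Discriminator step: push D(real) toward 1 and D(fake) toward 0
z = torch.randn(32, latent_dim)       # sample points from the latent space
fake = G(z).detach()                  # detach so this step does not update G
d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake), torch.zeros(32, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: push D(G(z)) toward 1, i.e. try to fool the discriminator
z = torch.randn(32, latent_dim)
g_loss = bce(D(G(z)), torch.ones(32, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Note how the generator is exactly a decoder: it never sees real data directly, only the discriminator's feedback on its samples.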

Bonus: Improving your GAN using Wasserstein distance (WGAN and WGAN-GP) Learn how to improve your GAN by changing its loss function and adding gradient penalty (GP).
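The gradient penalty at the heart of WGAN-GP can be sketched as below: the critic's gradient norm is pushed toward 1 at points interpolated between real and fake samples. The toy critic and tensor shapes are assumptions for illustration only.

```python
import torch

def gradient_penalty(critic, real, fake):
    """WGAN-GP term: penalize the critic when the norm of its gradient,
    taken at real/fake interpolates, deviates from 1."""
    eps = torch.rand(real.size(0), 1)                     # one mixing weight per sample
    interp = (eps * real + (1 - eps) * fake).requires_grad_(True)
    scores = critic(interp)
    grads, = torch.autograd.grad(
        outputs=scores, inputs=interp,
        grad_outputs=torch.ones_like(scores),
        create_graph=True,                                # keep graph so the penalty is differentiable
    )
    return ((grads.norm(2, dim=1) - 1) ** 2).mean()

critic = torch.nn.Linear(784, 1)                          # stand-in critic
real = torch.randn(8, 784)
fake = torch.randn(8, 784)
gp = gradient_penalty(critic, real, fake)
```

In a WGAN-GP training step this term is added, scaled by a coefficient (commonly 10), to the critic loss `D(fake).mean() - D(real).mean()`, replacing the weight clipping used by the original WGAN.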

We will use Google Colab and work our way together into building and training several GANs. You should be comfortable using Jupyter notebooks and Numpy, and training simple models in PyTorch.

Open GAN.ipynb in Google Colab.