This repository contains code examples for SageMaker JumpStart Generative AI, a tutorial series designed to help users get started with generative AI using Python and PyTorch.
Follow the instructions given by the workshop administrators on how to log in to the AWS account provided for this workshop. Do NOT use your personal or business account to run this workshop, as the required pre-built resources will not be available and there will be a cost for the compute required to run the generative AI models.
Go to https://dashboard.eventengine.run/login. You will be redirected to the page below.
Enter the event hash you have received from your instructor.
Click on Email One-Time Password (OTP).
You are redirected to the following page:
Enter your email address and click on Send passcode.
You are redirected to the following page:
Check your mailbox, copy-paste the one-time password and click on Sign in.
You are redirected to the Event Dashboard. Click on "Open AWS Console".
You are then redirected to the AWS Console.
Search for SageMaker in the AWS Services search menu and select it.
This will open the SageMaker console. Select "Domains" from the left-hand menu.
Click on the domain called "MyDomain".
You will see a user called "sagemakeruser". From the "Launch" menu, select "Studio".
This will load SageMaker Studio. It will take about 5-10 minutes to prepare the Studio environment the first time you run it.
Amazon SageMaker Studio is a web-based, integrated development environment (IDE) for machine learning that lets you build, train, debug, deploy, and monitor your machine learning models. Studio provides all the tools you need to take your models from experimentation to production while boosting your productivity.
If the AWS account has been provisioned by your AWS instructor, follow the next steps to access the SageMaker Studio environment:
You will be redirected to a new web tab that looks like this:
Click on Open Launcher, and then under the "Utilities and Files" section click on "System Terminal".
We are going to download some files from our Git repository so that we have the Python scripts available to execute within our SageMaker environment.
To do this, run the following command within your terminal:
`git clone https://github.com/Tampuri/GenAi`
This will take a few seconds to pull down our files. Once it has completed, click on the "File Browser" button in the left-hand Quick menu.
You should now see a folder called "GenAi". If you open it, you will see the files we will be using for our labs.

This repository showcases Stable Diffusion, a powerful generative modeling technique that allows for the creation of high-quality images from small datasets. The repository consists of three Jupyter notebooks.
The first notebook, `_Lab 1 - Text to Image.ipynb`, demonstrates how to easily create a SageMaker endpoint for the pre-trained Stable Diffusion model and generate images based on user text prompts. Users can input fun scenarios and prompts to generate various images, such as these cats.
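The snippet below is a minimal sketch of what the notebook automates, using the SageMaker Python SDK's JumpStart helpers. The model ID, instance type, and request payload shown here are assumptions and may differ from what the lab actually uses.

```python
# Minimal sketch of deploying a JumpStart Stable Diffusion text-to-image model.
# Assumptions: the model_id, instance_type, and payload schema below may differ
# from the ones used in the actual lab notebook.
from sagemaker.jumpstart.model import JumpStartModel

model = JumpStartModel(model_id="model-txt2img-stabilityai-stable-diffusion-v2-1-base")
predictor = model.deploy(initial_instance_count=1, instance_type="ml.g5.2xlarge")

# Generate an image from a text prompt; the response layout depends on the model.
response = predictor.predict({"prompt": "a cat astronaut riding a bicycle on the moon"})

# Delete the endpoint when finished to avoid ongoing compute charges.
predictor.delete_endpoint()
```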
The second notebook, `_Lab 2- Text to Image Inpainting.ipynb`, showcases the process of taking an existing image and inpainting it. Inpainting in generative AI is the process of filling in missing or corrupted parts of an image. This lab takes an original image of a dog and inpaints it with that of a cat.
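As a rough illustration (not the notebook's exact code), an inpainting request typically bundles the original image, a mask marking the region to repaint, and a text prompt. The endpoint name and the payload field names below are assumptions; the lab notebook defines the real schema.

```python
# Rough sketch of an inpainting request against an already-deployed Stable Diffusion
# inpainting endpoint. The field names ("prompt", "image", "mask_image") and the
# endpoint name are assumptions and may differ from the lab notebook.
import base64
import json

import boto3

def encode_image(path: str) -> str:
    """Read an image file and return it as a base64-encoded string."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")

payload = {
    "prompt": "a cat sitting on the grass",
    "image": encode_image("dog.png"),        # original picture of a dog
    "mask_image": encode_image("mask.png"),  # white pixels mark the area to repaint
}

runtime = boto3.client("sagemaker-runtime")
result = runtime.invoke_endpoint(
    EndpointName="my-inpainting-endpoint",  # hypothetical endpoint name
    ContentType="application/json",
    Body=json.dumps(payload),
)
print(json.loads(result["Body"].read()))
```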
By following the steps outlined in the notebook, you can collect a few images of your chosen entity from Google Images and utilize the fine-tuning process to train the Stable Diffusion model to create new and unique compositions. This approach offers a broad range of creative possibilities, allowing you to experiment with various scenarios and unleash your imagination.
This repository provides users with a powerful tool for generating high-quality images, even with limited datasets. The Stable Diffusion technique offers a versatile and efficient way to create customized and imaginative images.
This module focuses on utilizing the FLAN-T5-XL Large Language Model (LLM) to achieve N-shot learning via in-context learning. This involves leveraging the model's natural language understanding (NLU) capabilities to personalize virtual assistant responses and improve their performance for users.
In this module, you will learn step-by-step how to perform NLU tasks using FLAN-T5-XL. Specifically, you will learn how to read and understand multi-turn customer support chat transcripts, and engineer prompts that enable FLAN-T5-XL to learn in-context and improve its performance in N-shot learning. This will enhance the model's ability to infer context and answer questions derived from the chat transcripts.
Overall, this module provides an excellent opportunity to explore the capabilities of FLAN-T5-XL in solving NLU tasks, such as text summarization, abstractive question answering, sentiment analysis, and sentiment phrase extraction.
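To make the prompt-engineering idea concrete, here is a hedged sketch of a few-shot (in-context) prompt sent to a deployed FLAN-T5-XL endpoint. The `text_inputs` payload key, the endpoint name, and the example messages are assumptions, not the notebook's actual content.

```python
# Sketch of N-shot (in-context) prompting against a deployed FLAN-T5-XL endpoint.
# The payload key "text_inputs", the endpoint name, and the example messages
# are assumptions; the lab notebook defines the real schema and data.
import json

import boto3

# A few labelled examples followed by the new input the model should complete.
few_shot_prompt = """Classify the sentiment of each customer message.

Message: The agent resolved my billing issue in minutes. Sentiment: positive
Message: I was on hold for an hour and nobody helped me. Sentiment: negative
Message: My replacement router arrived but the setup guide was missing. Sentiment:"""

runtime = boto3.client("sagemaker-runtime")
response = runtime.invoke_endpoint(
    EndpointName="flan-t5-xl-endpoint",  # hypothetical endpoint name
    ContentType="application/json",
    Body=json.dumps({"text_inputs": few_shot_prompt, "max_length": 20}),
)
print(json.loads(response["Body"].read()))
```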
This module contains a notebook, `_Lab 1 - Finetune a Stable Diffusion Model`, which showcases the process of fine-tuning the Stable Diffusion model with a small set of images. This approach involves using images of cats from specific breeds or your own pet cats to teach the model how to recreate these images and incorporate them into various creative scenarios. This technique can be adapted to work with any set of images containing fewer than ten examples, such as images of pet dogs or other entities.
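For orientation, the sketch below shows one way to launch a JumpStart fine-tuning job with the SageMaker Python SDK. The model ID, S3 path, training channel name, and hyperparameters are assumptions and may differ from the notebook.

```python
# Sketch of fine-tuning a JumpStart Stable Diffusion model on a handful of images.
# Assumptions: the model_id, S3 path, channel name, and hyperparameters below
# may differ from the ones used in the lab notebook.
from sagemaker.jumpstart.estimator import JumpStartEstimator

estimator = JumpStartEstimator(
    model_id="model-txt2img-stabilityai-stable-diffusion-v2-1-base",
    instance_type="ml.g5.2xlarge",
    hyperparameters={"max_steps": "400"},  # hypothetical value
)

# The training channel should point to an S3 prefix containing your few (<10) images.
estimator.fit({"training": "s3://my-bucket/cat-images/"})  # hypothetical S3 path

# Deploy the fine-tuned model and generate images of your subject in new scenes.
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.g5.2xlarge")
```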
Each module has its own subdirectory containing code examples and instructions for use. Simply navigate to the module you are interested in and follow the instructions in the README file.
This workshop uses: Workshop ID: 80ae1ed2-f415-4d3d-9eb0-e9118c147bd4, Repository name: implementing-generative-ai-on-aws