This workshop shows AWS users how to use Amazon SageMaker and its associated services to build, train, and deploy generative AI models. The labs cover data science topics such as data processing at scale, model fine-tuning, real-time model deployment, and MLOps practices, all through a generative AI lens.
In this workshop, we use the Amazon Customer Reviews Dataset for the data processing labs because it contains a very large corpus of ~150 million customer reviews. This makes it well suited for showcasing SageMaker's distributed processing capabilities, which extend to many other large datasets.
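As a rough illustration of what distributed processing with SageMaker can look like, the sketch below runs a PySpark script as a SageMaker Processing job. The script path, S3 locations, role ARN, and instance settings are placeholders, not values from these labs:

```python
# Hypothetical sketch: running a distributed PySpark job on SageMaker Processing.
# The script path, bucket names, and role ARN below are placeholders.
from sagemaker.spark.processing import PySparkProcessor

spark_processor = PySparkProcessor(
    base_job_name="reviews-data-quality",                 # placeholder job name
    framework_version="3.1",                              # Spark version supported by SageMaker
    role="arn:aws:iam::123456789012:role/SageMakerRole",  # replace with your execution role
    instance_count=2,                                     # scale out across multiple instances
    instance_type="ml.m5.xlarge",
)

# Submit a PySpark script that reads the parquet reviews data from S3.
spark_processor.run(
    submit_app="./preprocess.py",                         # placeholder script
    arguments=["--input", "s3://your-bucket/amazon-reviews-parquet/"],
)
```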
After the data processing sections, we build a FLAN-T5-based NLP model using the DialogSum dataset from Hugging Face, which contains ~15k examples of dialogue with associated summaries.
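For quick orientation, the snippet below loads a DialogSum variant and a FLAN-T5 checkpoint with the Hugging Face libraries. The dataset ID (`knkarthick/dialogsum`) and model size (`google/flan-t5-base`) are assumptions and may differ from what the labs actually use:

```python
# Minimal sketch, assuming the public "knkarthick/dialogsum" dataset on the
# Hugging Face Hub and the "google/flan-t5-base" checkpoint.
from datasets import load_dataset
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

dataset = load_dataset("knkarthick/dialogsum")          # ~15k dialogue/summary pairs
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")

# Inspect one training example: a dialogue and its human-written summary.
example = dataset["train"][0]
print(example["dialogue"])
print(example["summary"])
```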
The labs walk you through the following steps:

- Register parquet data in S3 using AWS Glue and Amazon Athena
- Visualize data with serverless distributed PySpark on SageMaker notebooks using Glue interactive sessions
- Analyze data quality with distributed PySpark on SageMaker Processing Jobs
- Analyze the impact of prompt engineering using a HuggingFace model
- Perform feature engineering on a raw text dataset using HuggingFace
- Fine-tune a HuggingFace model for dialogue summarization
- Create an automated end-to-end MLOps workflow with SageMaker Pipelines
- Deploy a fine-tuned generative AI model to a real-time SageMaker Endpoint (see the sketch after this list)
- Run inference on a SageMaker Endpoint in real time
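To illustrate the last two steps, here is a hedged sketch of deploying a fine-tuned model to a real-time SageMaker Endpoint and invoking it with the SageMaker Python SDK. The model artifact path, role ARN, instance type, and container versions are placeholders rather than values from this repository:

```python
# Hypothetical sketch of deploying a fine-tuned FLAN-T5 model and running
# real-time inference. Replace the placeholder S3 path, role ARN, and versions.
from sagemaker.huggingface import HuggingFaceModel

hf_model = HuggingFaceModel(
    model_data="s3://your-bucket/flan-t5-dialogsum/model.tar.gz",  # fine-tuned artifacts
    role="arn:aws:iam::123456789012:role/SageMakerRole",
    transformers_version="4.26",
    pytorch_version="1.13",
    py_version="py39",
)

predictor = hf_model.deploy(
    initial_instance_count=1,
    instance_type="ml.g5.xlarge",
)

# Real-time inference against the endpoint.
response = predictor.predict({
    "inputs": "Summarize the following conversation:\n#Person1#: Hi, how are you? ..."
})
print(response)

# Clean up when finished to avoid ongoing charges.
predictor.delete_endpoint()
```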
See CONTRIBUTING for more information.
This library is licensed under the MIT-0 License. See the LICENSE file.