Automated-Testing-for-LLMOps

Create a continuous integration (CI) workflow for testing LLM applications effectively.


💻 Welcome to the "Automated Testing for LLMOps" course! Instructed by Rob Zuber, CTO at CircleCI, this course will teach you how to create a continuous integration (CI) workflow for evaluating your Large Language Model (LLM) applications at every change, enabling faster, safer, and more efficient application development.

Course Website: 📚 deeplearning.ai

Course Summary

In this course, you will learn the importance of systematic testing in LLM application development and how to implement a continuous integration workflow to catch issues early. Here's what you can expect to learn and experience:

  1. 📋 Robust LLM Evaluations: Write robust evaluations covering common problems such as hallucinations, data drift, and harmful or offensive output (a minimal rules-based sketch follows this list).

  2. ⚙️ Continuous Integration Workflow: Build a CI workflow to automatically evaluate every change to your LLM application.

  3. 🔄 Orchestrating the CI Workflow: Orchestrate your CI workflow to run specific evaluations at different stages of development.
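
To make the first point concrete, here is a minimal sketch of a rules-based evaluation in Python. Everything in it is illustrative rather than taken from the course materials: `generate_response` is a hypothetical stand-in for your application's LLM call, and the two rules are examples of the kinds of checks such tests encode.

```python
import re

def generate_response(prompt: str) -> str:
    # Hypothetical stand-in for your application's LLM call.
    return "Sure! Q1: Which planet is known as the Red Planet?"

def no_refusal(response: str) -> bool:
    # Rule: the app should answer rather than refuse with boilerplate.
    refusals = [r"i'm sorry", r"i cannot", r"as an ai"]
    return not any(re.search(p, response, re.IGNORECASE) for p in refusals)

def looks_like_quiz(response: str) -> bool:
    # Rule: a quiz response should contain at least one question.
    return "?" in response

def test_quiz_generation():
    response = generate_response("Generate a quiz about the solar system.")
    assert no_refusal(response)
    assert looks_like_quiz(response)
```

Written as a pytest-style test, a check like this can run automatically on every commit.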

Key Points

  • 🧪 Learn how LLM-based testing differs from traditional software testing and implement rules-based testing to assess your LLM application.
  • 📝 Build model-graded evaluations to test your LLM application using an evaluation LLM (a sketch follows this list).
  • 🔄 Automate your evaluations (rules-based and model-graded) using continuous integration tools from CircleCI.
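
The second bullet pairs naturally with a model-graded evaluation, where a second LLM judges the application's output. The sketch below assumes the openai Python package (v1 interface) with an OPENAI_API_KEY set in the environment; the grader prompt and model name are illustrative assumptions, not the course's exact choices.

```python
from openai import OpenAI  # assumes openai>=1.0 and OPENAI_API_KEY set

client = OpenAI()

GRADER_PROMPT = """You are grading a quiz written by an assistant.
Reply with exactly one word: PASS if the quiz is factually grounded
and inoffensive, FAIL otherwise.

Quiz to grade:
{quiz}
"""

def model_graded_eval(quiz: str, grader_model: str = "gpt-4o-mini") -> bool:
    # Ask an evaluation LLM to judge the application's output.
    result = client.chat.completions.create(
        model=grader_model,  # illustrative model name
        messages=[{"role": "user", "content": GRADER_PROMPT.format(quiz=quiz)}],
        temperature=0,
    )
    verdict = result.choices[0].message.content.strip().upper()
    return verdict.startswith("PASS")

def test_quiz_is_acceptable():
    quiz = "Q1: Which planet is known as the Red Planet? A: Mars"
    assert model_graded_eval(quiz)
```

In a CI setup, both kinds of checks can live in ordinary test files that CircleCI runs (for example with pytest) on every push, failing the pipeline when an evaluation regresses.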

About the Instructor

🌟 Rob Zuber is the CTO at CircleCI, bringing extensive expertise in software development and continuous integration to guide you through automated testing for LLMOps.

🔗 To enroll in the course or for further information, visit deeplearning.ai.