
Short Courses @ deeplearning.ai

This repository contains the notebooks used in several short courses about LLMs taught on DeepLearning.ai.

Summary of the Courses

1. ChatGPT Prompt Engineering for Developers

Goal: Going beyond the simple chat box. Using API access to build LLMs into your own applications, and learning to build a custom chatbot.

  • Learnt prompt engineering best practices for application development
  • Discovered new ways to use LLMs, including how to build your own chatbot
  • Gained hands-on practice writing and iterating on prompts using the OpenAI API
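
As a rough sketch of the hands-on portion, a single prompt/completion call with the OpenAI Python client looks something like this (model name and prompt are placeholders, and the course notebooks may pin an older version of the library):

```python
# Minimal sketch of calling a chat model via the OpenAI Python client.
# Assumes OPENAI_API_KEY is set in the environment; model name is a placeholder.
from openai import OpenAI

client = OpenAI()

def get_completion(prompt, model="gpt-3.5-turbo"):
    """Send a single-turn prompt and return the model's text response."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic output is handy when iterating on prompts
    )
    return response.choices[0].message.content

print(get_completion("Summarize prompt engineering best practices in one sentence."))
```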

2. Building Systems with the ChatGPT API

Goal: Leveling up the use of LLMs. Learning to break down complex tasks, automate workflows, chain LLM calls, and get better outputs.

  • Efficiently built multi-step systems using large language models.
  • Learnt to split complex tasks into a pipeline of subtasks using multistage prompts.
  • Evaluated your own LLM inputs and outputs for safety, accuracy, and relevance.
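
A rough illustration of splitting a task into a pipeline of prompts (classify first, then answer with category-specific instructions); the client setup follows the sketch above, and all prompts here are invented for illustration:

```python
# Hypothetical two-stage pipeline: classify the request, then answer with a
# category-specific system prompt chained onto the first call's output.
from openai import OpenAI

client = OpenAI()

def ask(messages, model="gpt-3.5-turbo"):
    response = client.chat.completions.create(model=model, messages=messages, temperature=0)
    return response.choices[0].message.content

user_query = "My router keeps dropping the connection every few minutes."

# Stage 1: route the query to a category.
category = ask([
    {"role": "system", "content": "Classify the user message as 'billing', 'technical', or 'other'. Reply with one word."},
    {"role": "user", "content": user_query},
]).strip().lower()

# Stage 2: answer with instructions tailored to the category.
answer = ask([
    {"role": "system", "content": f"You are a {category} support agent. Give a short, step-by-step reply."},
    {"role": "user", "content": user_query},
])
print(category, answer, sep="\n")
```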

3. LangChain for LLM Application Development

Goal: LangChain is a framework for taking LLMs out of the box. Learning to use LangChain to call LLMs into new environments, and to use memories, chains, and agents to take on new and complex tasks.

  • Learnt LangChain directly from the creator of the framework, Harrison Chase
  • Applied LLMs to proprietary data to build personal assistants and specialized chatbots
  • Used agents, chained calls, and memories to expand your use of LLMs
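
A hedged sketch of a chain with conversation memory, assuming the classic LangChain imports used around the time of the course (newer releases have reorganized these modules):

```python
# Sketch of a conversational chain with buffer memory, using the classic
# LangChain API; import paths may differ in newer LangChain versions.
from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory

llm = ChatOpenAI(temperature=0)
memory = ConversationBufferMemory()
conversation = ConversationChain(llm=llm, memory=memory, verbose=True)

conversation.predict(input="Hi, my name is Sam.")
print(conversation.predict(input="What is my name?"))  # memory supplies the earlier turn
```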

4. LangChain: Chat with Your Data

Goal: Creating a chatbot to interface with your own private data and documents using LangChain.

  • Learnt from LangChain creator, Harrison Chase
  • Utilized 80+ loaders for diverse data sources in LangChain
  • Created a chatbot to interact with your own documents and data
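
One of those loaders feeding a retrieval chain, as a hedged sketch assuming the classic LangChain import layout (file path and question are placeholders):

```python
# Rough sketch: load a document, split it, embed it into a vector store,
# then answer questions over it. Imports follow the classic LangChain layout.
from langchain.document_loaders import PyPDFLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.chains import RetrievalQA
from langchain.chat_models import ChatOpenAI

docs = PyPDFLoader("my_notes.pdf").load()  # one of the many available loaders
chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=150).split_documents(docs)
vectordb = Chroma.from_documents(chunks, OpenAIEmbeddings())

qa = RetrievalQA.from_chain_type(llm=ChatOpenAI(temperature=0), retriever=vectordb.as_retriever())
print(qa.run("What are the key takeaways?"))
```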

5. Finetuning Large Language Models

Goal: Learning to finetune an LLM in minutes and specialize it to use your own data.

  • Mastered LLM finetuning basics
  • Differentiated finetuning from prompt engineering and learnt when to use each
  • Gained hands-on experience with real datasets for your own projects
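
The course uses its own finetuning stack; purely as a generic illustration of the idea, a tiny causal-LM finetune with Hugging Face Transformers on a toy in-memory dataset could look like this (model name, data, and hyperparameters are arbitrary):

```python
# Illustrative-only finetuning sketch with Hugging Face Transformers
# (not the course's exact stack). Toy dataset and hyperparameters are placeholders.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "distilgpt2"  # small model so the sketch runs quickly
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

examples = Dataset.from_dict({"text": [
    "Question: What does finetuning do? Answer: It specializes a base model on your data.",
    "Question: When should I finetune? Answer: When prompting alone cannot reach the behavior you need.",
]})
tokenized = examples.map(
    lambda e: tokenizer(e["text"], truncation=True, max_length=128),
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetune-demo", num_train_epochs=1, per_device_train_batch_size=1),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),  # builds labels for causal LM
)
trainer.train()
```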

6. Large Language Models with Semantic Search

Goal: Learning to use LLMs to enhance search and summarize results.

  • Enhanced keyword search using Cohere Rerank
  • Used embeddings to leverage dense retrieval, a powerful NLP tool
  • Evaluated retrieval effectiveness for further optimization
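
A conceptual dense-retrieval sketch: the course uses Cohere embeddings and a vector database, but the core idea (embed, then rank by cosine similarity) can be shown self-contained with a local sentence-transformers model standing in:

```python
# Dense retrieval in miniature: embed documents and a query, rank by cosine similarity.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")
docs = [
    "The Eiffel Tower is located in Paris.",
    "Python is a popular programming language.",
    "The Louvre museum houses the Mona Lisa.",
]
doc_vecs = model.encode(docs, normalize_embeddings=True)

query_vec = model.encode(["Where is the Mona Lisa kept?"], normalize_embeddings=True)[0]
scores = doc_vecs @ query_vec  # cosine similarity, since vectors are normalized
for idx in np.argsort(scores)[::-1]:
    print(f"{scores[idx]:.3f}  {docs[idx]}")
```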

7. Building Generative AI Applications with Gradio

Goal: Creating and demoing machine learning applications quickly. Sharing your own app with the world on Hugging Face Spaces.

  • Rapidly developed ML apps
  • Created image generation, captioning, and text summarization apps
  • Shared apps with teammates and beta testers on Hugging Face Spaces
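
A minimal Gradio demo wraps a Python function in a web UI; the "summarizer" below is a trivial placeholder where the course wires in real image and text models:

```python
# Minimal Gradio sketch: turn a function into a shareable web demo.
import gradio as gr

def summarize(text: str) -> str:
    """Toy 'summarizer' that just returns the first sentence."""
    return text.split(".")[0] + "." if text else ""

demo = gr.Interface(fn=summarize, inputs="text", outputs="text", title="Toy Summarizer")
demo.launch()  # share=True would create a temporary public link for beta testers
```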

8. Evaluating and Debugging Generative AI Models Using Weights and Biases

Goal: Learning MLOps tools for managing, versioning, debugging, and experimenting in your ML workflow.

  • Learnt to evaluate LLM and image models with platform-independent tools
  • Instrumented training notebooks for tracking, versioning, and logging
  • Monitored and traced LLM behavior in complex interactions over time
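
Instrumenting a training loop for tracking looks roughly like the sketch below; the project name and metrics are placeholders, and a W&B login (or WANDB_API_KEY) is assumed:

```python
# Sketch of logging config and metrics to Weights & Biases during training.
import random
import wandb

wandb.init(project="llm-course-demo", config={"lr": 1e-4, "epochs": 3})

for epoch in range(wandb.config.epochs):
    fake_loss = 1.0 / (epoch + 1) + random.random() * 0.05  # stand-in for a real training loss
    wandb.log({"epoch": epoch, "loss": fake_loss})

wandb.finish()
```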

9. How Diffusion Models Work

Goal: Learning and building diffusion models from the ground up. Starting with an image of pure noise and arriving at a final image, building intuition at each step along the way.

  • Understood diffusion models in use today
  • Built your own diffusion model, and learnt to train it
  • Implemented algorithms to speed up sampling 10x
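
The sampling loop at the heart of the course repeatedly removes predicted noise; a stripped-down DDPM-style reverse pass with a placeholder noise predictor (a trained U-Net would go in its place) looks roughly like this:

```python
# Stripped-down DDPM-style sampling loop on toy data. The "model" here is a
# placeholder returning zeros; in practice a trained network predicts the noise.
import numpy as np

T = 500
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def predict_noise(x, t):
    return np.zeros_like(x)  # placeholder for a trained noise-prediction network

x = np.random.randn(1, 16, 16)  # start from pure noise
for t in reversed(range(T)):
    eps = predict_noise(x, t)
    # DDPM mean: remove the predicted noise component, then add fresh noise except at t=0.
    x = (x - (1 - alphas[t]) / np.sqrt(1 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
    if t > 0:
        x += np.sqrt(betas[t]) * np.random.randn(*x.shape)
print(x.shape)
```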

10. Pair Programming with a Large Language Model

Goal: Learning how to effectively prompt an LLM to help improve, debug, understand, and document code.

  • Used LLMs to simplify your own code and become a more productive software engineer
  • Reduced technical debt by explaining and documenting a complex existing code base
  • Got free access to the PaLM API for use throughout the course

11. Understanding and Applying Text Embeddings

Goal: Learning how to accelerate the application development process with text embeddings.

  • Employed text embeddings for sentence and paragraph meaning
  • Used text embeddings for clustering, classification, and outlier detection
  • Built a question-answering system with Google Cloud’s Vertex AI
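
The course obtains its embeddings from Vertex AI; to show only the downstream mechanics, the sketch below clusters placeholder vectors (any real sentence-embedding matrix would slot in the same way):

```python
# Clustering sketch over an embedding matrix; random vectors stand in for
# real sentence embeddings purely to keep the example self-contained.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(20, 768))  # placeholder for real text embeddings

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(embeddings)
print(kmeans.labels_)  # cluster assignment per sentence
```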

12. How Business Thinkers Can Start Building AI Plugins With Semantic Kernel

Goal: Learning Microsoft’s open source orchestrator, Semantic Kernel, and developing business applications using LLMs.

  • Learnt Microsoft’s open-source orchestrator, the Semantic Kernel
  • Developed business planning and analysis skills while leveraging AI tools
  • Advanced skills in LLMs by using memories, connectors, chains, and more

13. Functions, Tools and Agents with LangChain

Goal: Learning and applying the new capabilities of LLMs as a developer tool.

  • Learnt about the most recent advancements in LLM APIs.
  • Used LangChain Expression Language (LCEL), a new syntax to compose and customize chains and agents faster.
  • Applied these new capabilities by building up a conversational agent.
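
A minimal LCEL sketch composes a prompt, a chat model, and an output parser with the pipe operator; the imports assume a recent LangChain with the langchain-openai package installed:

```python
# LCEL in miniature: prompt | model | parser, invoked with a dict of inputs.
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template("Tell me a one-line fact about {topic}.")
chain = prompt | ChatOpenAI(temperature=0) | StrOutputParser()

print(chain.invoke({"topic": "vector databases"}))
```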

14. Building and Evaluating Advanced RAG Applications

Goal: Learning how to efficiently bring Retrieval Augmented Generation (RAG) into production by enhancing retrieval techniques and mastering evaluation metrics.

  • Learnt methods like sentence-window retrieval and auto-merging retrieval, improving your RAG pipeline’s performance beyond the baseline.
  • Learnt evaluation best practices to streamline the process and iteratively build a robust system.
  • Dived into the RAG triad for evaluating the relevance and truthfulness of an LLM’s response: Context Relevance, Groundedness, and Answer Relevance.

15. Vector Databases: from Embeddings to Applications

Goal: Designing and executing real-world applications of vector databases.

  • Built efficient, practical applications, including hybrid and multilingual searches, for diverse industries.
  • Understood vector databases and used them to develop GenAI applications without needing to train or fine-tune an LLM.
  • Learnt to discern when best to apply a vector database to an application.

16. Reinforcement Learning from Human Feedback

Goal: A conceptual and hands-on introduction to tuning and evaluating large language models (LLMs) using Reinforcement Learning from Human Feedback.

  • Got a conceptual understanding of Reinforcement Learning from Human Feedback (RLHF), as well as the datasets needed for this technique
  • Fine-tuned the Llama 2 model using RLHF with the open source Google Cloud Pipeline Components Library
  • Evaluated tuned model performance against the base model with evaluation methods

17. Quality and Safety for LLM Applications

Goal: Learning how to evaluate the safety and security of LLM applications and protect against potential risks.

  • Monitored and enhanced security measures over time to safeguard LLM applications.
  • Detected and prevented critical security threats like hallucinations, jailbreaks, and data leakage.
  • Explored real-world scenarios to prepare for potential risks and vulnerabilities.

18. Advanced Retrieval for AI with Chroma

Goal: Learning advanced retrieval techniques to improve the relevancy of retrieved results.

  • Learnt to recognize when queries are producing poor results.
  • Learnt to use a large language model (LLM) to improve queries.
  • Learnt to fine-tune embeddings with user feedback.
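
The basic Chroma workflow of creating a collection, adding documents, and querying it can be sketched as follows (collection name and documents are placeholders, and the default embedding function is used):

```python
# Minimal chromadb sketch: in-memory client, one collection, one query.
import chromadb

client = chromadb.Client()
collection = client.create_collection("course_notes")
collection.add(
    documents=["Chroma stores embeddings alongside their documents.",
               "Query expansion can rewrite a weak query with an LLM."],
    ids=["doc1", "doc2"],
)
results = collection.query(query_texts=["How can I improve a poor query?"], n_results=1)
print(results["documents"])
```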

19. Building Applications with Vector Databases

Goal: Learning to build six applications powered by vector databases: semantic search, retrieval augmented generation (RAG), anomaly detection, hybrid search, image similarity search, and recommender systems, each using a different dataset.

  • Learnt to create six exciting applications of vector databases and implement them using Pinecone.
  • Built a hybrid search app that combines both text and images for improved multimodal search results.
  • Learnt how to build an app that measures and ranks facial similarity.

20. Knowledge Graphs for RAG

Goal: Learning how to build and use knowledge graph systems to improve your retrieval augmented generation applications.

  • Used Neo4j’s query language Cypher to manage and retrieve data stored in knowledge graphs.
  • Wrote knowledge graph queries that find and format text data to provide more relevant context to LLMs for Retrieval Augmented Generation.
  • Built a question-answering system using Neo4j and LangChain to chat with a knowledge graph of structured text documents.
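
Running a Cypher query from Python with the neo4j driver looks roughly like this; the connection details, node labels, and properties are placeholders for a real knowledge graph:

```python
# Sketch of fetching graph-stored text chunks with Cypher via the neo4j driver.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

cypher = """
MATCH (c:Chunk)-[:PART_OF]->(d:Document {title: $title})
RETURN c.text AS text
LIMIT 5
"""

with driver.session() as session:
    for record in session.run(cypher, title="Annual Report"):
        print(record["text"])  # context that could be handed to an LLM for RAG
driver.close()
```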

21. Open Source Models with Hugging Face

Goal: Learning how to easily build AI applications using open source models and Hugging Face tools.

  • Found and filtered open source models on Hugging Face Hub based on task, rankings, and memory requirements.
  • Wrote just a few lines of code using the transformers library to perform text, audio, image, and multimodal tasks.
  • Learnt about sharing AI apps with a user-friendly interface or via API and ran them on the cloud using Gradio and Hugging Face Spaces.
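
The "few lines of code" claim comes from the transformers pipeline API, where the task string selects a default open source model from the Hub (downloaded on first use); the example inputs are illustrative:

```python
# Minimal transformers pipeline sketch covering two tasks.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
print(classifier("Open source models make prototyping delightfully fast."))

translator = pipeline("translation_en_to_fr")
print(translator("Hugging Face tools cover text, audio, and image tasks."))
```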

22. Prompt Engineering with Llama 2 & 3

Goal: Learning best practices for prompting and selecting among Meta Llama 2 & 3 models.

  • Learnt best practices specific to prompting Llama 2 & 3 models.
  • Interacted with Meta Llama 2 Chat, Code Llama, and Llama Guard models.
  • Learnt to build safe, responsible AI applications using the Llama Guard model.
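
Prompting conventions differ between base and chat models; a hedged sketch of the Llama 2 chat template with a system prompt is shown below (the helper name and example strings are illustrative, and Llama 3 uses a different template):

```python
# Sketch of assembling a Llama 2 chat prompt with a system message.
def build_llama2_prompt(system: str, user: str) -> str:
    return (
        "<s>[INST] <<SYS>>\n"
        f"{system}\n"
        "<</SYS>>\n\n"
        f"{user} [/INST]"
    )

prompt = build_llama2_prompt(
    system="You are a helpful, concise assistant.",
    user="Explain what Llama Guard is used for.",
)
print(prompt)
```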