This is the code repository for Building LLM Powered Applications, published by Packt.
Create intelligent apps and agents with large language models
The book provides a solid theoretical foundation of what LLMs are and how they are architected. Taking a hands-on approach, it then gives readers a step-by-step guide to implementing LLM-powered apps for specific tasks, using powerful frameworks such as LangChain.
- Explore the core components of LLM architecture, including encoder-decoder blocks and embeddings
- Understand the unique features of LLMs like GPT-3.5/4, Llama 2, and Falcon LLM
- Use AI orchestrators like LangChain, with Streamlit for the frontend
- Get familiar with LLM components such as memory, prompts, and tools
- Learn how to use non-parametric knowledge and vector databases
- Understand the implications of LFMs for AI research and industry applications
- Customize your LLMs with fine tuning
- Learn about the ethical implications of LLM-powered applications
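As a taste of the retrieval idea behind the non-parametric knowledge and vector databases mentioned above, here is a minimal sketch in plain Python. The documents and their hand-made 3-dimensional "embeddings" are illustrative assumptions, not examples from the book; in a real app the vectors would come from an embedding model.

```python
import math

# Toy "embeddings": hand-made 3-dimensional vectors standing in for
# what an embedding model would produce for each document.
documents = {
    "LLMs can call external tools.": [0.9, 0.1, 0.0],
    "Vector databases store embeddings.": [0.1, 0.9, 0.1],
    "Streamlit builds simple frontends.": [0.0, 0.2, 0.9],
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vector, k=1):
    """Return the k documents whose embeddings are closest to the query."""
    ranked = sorted(
        documents.items(),
        key=lambda item: cosine_similarity(query_vector, item[1]),
        reverse=True,
    )
    return [text for text, _ in ranked[:k]]

# A query embedding pointing roughly toward the "vector databases" document.
print(retrieve([0.2, 0.8, 0.1]))  # -> ['Vector databases store embeddings.']
```

A vector database does essentially this lookup at scale, with approximate nearest-neighbour indexes instead of a brute-force sort.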
- Introduction to Large Language Models
- LLMs for AI-Powered Applications
- Choosing an LLM for Your Application
- Prompt Engineering
- Embedding LLMs within Your Applications
- Building Conversational Applications
- Search and Recommendation Engines with LLMs
- Using LLMs with Structured Data
- Working with Code
- Building Multimodal Applications with LLMs
- Fine-Tuning Large Language Models
- Responsible AI
- Emerging Trends and Innovations
You can run the notebooks directly from the table below:
If you feel this book is for you, get your copy today!
You can run all of the code files in the book with the following software and hardware:
| Chapter | Software required | Link to the software | Hardware specifications | OS required |
| --- | --- | --- | --- | --- |
| 4-11 | Python | Download | Suitable | Windows/Linux/macOS |
- Page 8, Chapter 1 : P(“table”), P(“chain”), and P(“roof”) are the prior probabilities for each candidate word, based on the language model’s knowledge of the frequency of these words in the training data. Correction: P(“table”), P(“chair”), and P(“roof”) are the prior probabilities for each candidate word, based on the language model’s knowledge of the frequency of these words in the training data.
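The prior P(w) in that example is simply the word's relative frequency in the training data. A quick illustration with a made-up toy corpus (the corpus is an assumption for demonstration, not the book's data):

```python
from collections import Counter

# Made-up toy corpus standing in for "training data".
corpus = "the cat sat on the table the dog sat on the chair".split()

counts = Counter(corpus)
total = len(corpus)

def prior(word):
    """P(word): relative frequency of the word in the corpus."""
    return counts[word] / total

# "table" and "chair" each appear once in 12 tokens, so both priors are 1/12.
print(prior("table"), prior("chair"))
```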
Join the book's community for the latest updates and discussions on the Discord server at Discord
If you have already purchased a print or Kindle version of this book, you can get a DRM-free PDF version at no cost. Simply click the link to claim your free PDF: Free-PDF
We also provide a PDF file that has color images of the screenshots/diagrams used in this book at Color Images
After completing her bachelor's degree in finance, Valentina Alto pursued a master's degree in data science in 2021. She began her professional career at Microsoft as an Azure Solution Specialist, and since 2022, she has been primarily focused on working with Data & AI solutions in the Manufacturing and Pharmaceutical industries. Valentina collaborates closely with system integrators on customer projects, with a particular emphasis on deploying cloud architectures that incorporate modern data platforms, data mesh frameworks, and applications of Machine Learning and Artificial Intelligence.