This guide is intended primarily for technical teams building a basic conversational AI solution with RAG. It provides an introduction to the technical fundamentals and helps anyone with a basic technical background get started in the AI domain, combining foundational theory with hands-on code implementation. Note that most of the content is compiled from various online resources; considerable effort went into curating and organizing the material from these numerous sources.
- Intro
- What is Conversational AI?
- The Technology Behind Conversational AI
- LLM Basics
- What is a large language model (LLM)?
- How do LLMs work?
- How do LLMs Relate to and Differ from Transformers?
- What are Pipelines in Transformers?
- What are Hugging Face Transformers?
- Chains
- What are chains?
- Foundational chain types in LangChain
- LLMChain
- Creating an LLMChain
- Sequential Chains
- SimpleSequentialChain
- SequentialChain
- Transformation
- Prompt Engineering
- What is Prompt Engineering?
- Embeddings
- Vector Stores
- Chunking
- Quantization
- What is Quantization?
- How does quantization work?
- Using Hugging Face and bitsandbytes
- Loading a Model in 4-bit Quantization
- Loading a Model in 8-bit Quantization
- Changing the Compute Data Type
- Using NF4 Data Type
- Nested Quantization for Memory Efficiency
- Loading a Quantized Model from the Hub
- Exploring Advanced Techniques and Configurations
- Temperature
- LangChain Memory
- Agents & Tools
- Walkthrough: A Project Utilizing LangChain
- RAG
- Groq
- What is LlamaParse?
- Use Case – 1
- Use Case – 2
- Source Code