🤖 ContextAgent

A modular, production-ready AI assistant backend for document-based question answering, built with Python, FastAPI, LangChain, the OpenAI API, and ChromaDB around a Retrieval-Augmented Generation (RAG) pipeline.

✨ Features

  • 🔍 RAG Pipeline: Embed documents and perform similarity search for context retrieval
  • 🤖 LangChain Agent: Chain of tools including Calculator and Google Search
  • 💬 Conversational Memory: Maintains conversation history using LangChain's ConversationBufferMemory
  • 🧾 Document Ingestion: Support for PDF, TXT, Markdown, and DOCX files
  • 🛠️ Embeddings: OpenAI embeddings for document vectorization
  • 🗂️ Vector Store: ChromaDB for fast document retrieval
  • 🔑 Environment-based Configuration: Secure API key management
  • 📄 Swagger Documentation: Auto-generated API docs
  • 🚀 FastAPI Backend: High-performance async API

๐Ÿ—๏ธ Architecture

ContextAgent/
├── app/
│   ├── main.py                # FastAPI app entrypoint
│   ├── routes/
│   │   ├── chat.py            # Chat endpoints
│   │   └── ingest.py          # Document upload endpoints
│   ├── chains/
│   │   ├── qa_chain.py        # RAG + LLM chain
│   │   └── agent_chain.py     # LangChain agent setup
│   ├── tools/
│   │   ├── calculator.py      # Custom LangChain tools
│   │   └── google_search.py   # Web search tool
│   ├── memory/
│   │   └── session_memory.py  # Conversational memory
│   ├── ingest/
│   │   ├── embedder.py        # Embedding function
│   │   └── vector_store.py    # ChromaDB setup
│   ├── utils/
│   │   ├── config.py          # Environment + settings
│   │   └── document_loader.py # Document processing
│   └── schemas/
│       └── request_model.py   # Pydantic schemas
├── .env.example               # Environment variables template
├── requirements.txt           # Python dependencies
└── README.md                  # This file
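
The entrypoint mostly just wires the routers into the FastAPI app. Here is a minimal sketch of what app/main.py plausibly looks like, assuming each route module exposes an APIRouter named router (an assumption, not verified against the source):

# Illustrative sketch of app/main.py; the real entrypoint may differ
import uvicorn
from fastapi import FastAPI

from app.routes import chat, ingest  # assumes each module exposes `router`

app = FastAPI(title="ContextAgent")

# Mount the route modules shown in the tree above
app.include_router(chat.router, prefix="/chat", tags=["chat"])
app.include_router(ingest.router, prefix="/ingest", tags=["ingest"])

if __name__ == "__main__":
    # Matches the `python -m app.main` invocation from the Quick Start
    uvicorn.run(app, host="0.0.0.0", port=8000)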

🚀 Quick Start

1. Clone and Setup

git clone https://github.com/webcodelabb/ContextAgent.git
cd ContextAgent

2. Install Dependencies

pip install -r requirements.txt

3. Configure Environment

Copy the example environment file and add your API keys:

cp .env.example .env

Edit .env and add your OpenAI API key:

OPENAI_API_KEY=your_openai_api_key_here
OPENAI_MODEL=gpt-4

4. Run the Application

python -m app.main

The server will start at http://localhost:8000
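
During development you can also run it with auto-reload, assuming app/main.py exposes the FastAPI instance as app (as in the sketch above):

uvicorn app.main:app --reload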

📚 API Documentation

Chat Endpoints

POST /chat/

Ask questions to the AI assistant.

Request Body:

{
  "question": "What does this PDF say about climate change?",
  "history": [
    {"role": "user", "content": "Summarize the document"},
    {"role": "agent", "content": "Sure, here's the summary..."}
  ],
  "use_rag": true,
  "use_agent": false
}

Response:

{
  "answer": "The PDF discusses the recent changes in global temperatures and the effects of greenhouse gases...",
  "sources": ["climate_report_2024.pdf"],
  "reasoning": null,
  "metadata": {
    "model": "gpt-4",
    "session_id": "default",
    "documents_retrieved": 3
  }
}

GET /chat/memory/{session_id}

Get conversation history for a session.

DELETE /chat/memory/{session_id}

Clear conversation memory for a session.

GET /chat/tools

Get information about available tools.

GET /chat/stats

Get statistics about the chat system.

Document Ingestion Endpoints

POST /ingest/upload

Upload a document for processing.

Supported formats: PDF, TXT, MD, DOCX

POST /ingest/directory

Ingest all supported documents from a directory.

GET /ingest/stats

Get statistics about ingested documents.

DELETE /ingest/clear

Clear all ingested documents.

Health Check

GET / or GET /health

Check system health and configuration.

🔧 Configuration

Environment Variables

Variable                   Description                  Default
OPENAI_API_KEY             OpenAI API key (required)    -
OPENAI_MODEL               OpenAI model to use          gpt-4
VECTOR_STORE_TYPE          Vector store type            chroma
CHROMA_PERSIST_DIRECTORY   ChromaDB storage path        ./chroma_db
HOST                       Server host                  0.0.0.0
PORT                       Server port                  8000
SERP_API_KEY               SerpAPI key for web search   -
LANGCHAIN_TRACING_V2       Enable LangSmith tracing     false
LANGCHAIN_API_KEY          LangSmith API key            -
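
Per the tree above, these are read in app/utils/config.py. A hedged sketch of what such a settings module might look like, assuming python-dotenv loads the .env file; the exact names and handling are illustrative, not the project's actual code:

# Illustrative settings loader; the real app/utils/config.py may differ
import os

from dotenv import load_dotenv

load_dotenv()  # pull values from .env into the process environment

OPENAI_API_KEY = os.environ["OPENAI_API_KEY"]  # required, fails fast if missing
OPENAI_MODEL = os.getenv("OPENAI_MODEL", "gpt-4")
VECTOR_STORE_TYPE = os.getenv("VECTOR_STORE_TYPE", "chroma")
CHROMA_PERSIST_DIRECTORY = os.getenv("CHROMA_PERSIST_DIRECTORY", "./chroma_db")
HOST = os.getenv("HOST", "0.0.0.0")
PORT = int(os.getenv("PORT", "8000"))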

🛠️ Usage Examples

Python Client

import requests

# Chat with RAG
response = requests.post("http://localhost:8000/chat/", json={
    "question": "What are the main points in the uploaded documents?",
    "use_rag": True
})
print(response.json()["answer"])

# Chat with Agent
response = requests.post("http://localhost:8000/chat/", json={
    "question": "What's 15 * 23?",
    "use_agent": True
})
print(response.json()["answer"])
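
The memory endpoints can be driven the same way. The session ID "default" below matches the session_id in the response metadata shown earlier; the exact response shape of the GET call is not pinned down here:

# Inspect, then clear, conversational memory for a session
history = requests.get("http://localhost:8000/chat/memory/default")
print(history.json())

requests.delete("http://localhost:8000/chat/memory/default")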

cURL Examples

# Simple chat
curl -X POST "http://localhost:8000/chat/" \
  -H "Content-Type: application/json" \
  -d '{"question": "Hello, how are you?"}'

# Upload document
curl -X POST "http://localhost:8000/ingest/upload" \
  -F "file=@document.pdf"

# Get system stats
curl "http://localhost:8000/chat/stats"

🧪 Testing

Manual Testing

  1. Start the server: python -m app.main
  2. Open Swagger docs: http://localhost:8000/docs
  3. Test endpoints through the interactive interface

API Testing

# Health check
curl http://localhost:8000/health

# Upload a test document
curl -X POST "http://localhost:8000/ingest/upload" \
  -F "file=@test_document.pdf"

# Ask a question
curl -X POST "http://localhost:8000/chat/" \
  -H "Content-Type: application/json" \
  -d '{"question": "What is this document about?"}'

🔍 Features in Detail

RAG Pipeline

  1. Document Ingestion: Upload PDF, TXT, MD, and DOCX files
  2. Text Processing: Split documents into chunks (see the sketch after this list)
  3. Embedding: Convert text to vectors using OpenAI
  4. Storage: Store in ChromaDB vector database
  5. Retrieval: Find relevant documents for queries
  6. Generation: Generate answers using LLM with context
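
As an illustration of step 2, here is a hedged sketch using LangChain's RecursiveCharacterTextSplitter; the chunk size and overlap are common defaults, not necessarily the values this project uses:

from langchain.text_splitter import RecursiveCharacterTextSplitter

document_text = "..."  # stand-in for text extracted from an uploaded file

# Step 2: split into overlapping chunks so each embedding stays focused
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
chunks = splitter.split_text(document_text)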

LangChain Agent

  • Calculator Tool: Perform mathematical calculations
  • Google Search Tool: Search the web for current information
  • Conversational Memory: Maintains chat history
  • Multi-step Reasoning: Chain multiple tools together (a wiring sketch follows this list)
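
A hedged sketch of how such an agent could be wired using LangChain's classic initialize_agent API; the calculator body and model settings are simplified placeholders, not the project's actual agent_chain.py or calculator.py:

from langchain.agents import AgentType, Tool, initialize_agent
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory

llm = ChatOpenAI(model_name="gpt-4", temperature=0)

tools = [
    Tool(
        name="Calculator",
        func=lambda expr: str(eval(expr)),  # toy stand-in; unsafe outside a demo
        description="Evaluate a math expression, e.g. '15 * 23'.",
    ),
]

# Shared memory keeps the agent conversational across turns
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)

agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
    memory=memory,
)
print(agent.run("What's 15 * 23?"))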

Document Processing

  • PDF: PyPDF2 for text extraction (see the snippet after this list)
  • TXT: UTF-8 text files
  • MD: Markdown files
  • DOCX: Microsoft Word documents
  • Chunking: Intelligent text splitting with overlap
  • Metadata: Preserves source information
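
For the PDF path, a minimal sketch with PyPDF2; the real document_loader.py also handles the other formats, chunking, and richer metadata:

from PyPDF2 import PdfReader

# Extract raw text page by page, keeping the filename as source metadata
reader = PdfReader("climate_report_2024.pdf")
text = "\n".join(page.extract_text() or "" for page in reader.pages)
metadata = {"source": "climate_report_2024.pdf", "pages": len(reader.pages)}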

🚀 Production Deployment

Docker (Recommended)

FROM python:3.11-slim

WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt

COPY . .
EXPOSE 8000

CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]

Environment Setup

# Production environment
export OPENAI_API_KEY=your_production_key
export OPENAI_MODEL=gpt-4
export HOST=0.0.0.0
export PORT=8000

Security Considerations

  • Set up proper CORS configuration (a sketch follows this list)
  • Use environment variables for secrets
  • Implement authentication if needed
  • Monitor API usage and costs
  • Set up logging and monitoring
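
For example, CORS can be locked down with FastAPI's built-in middleware; the origin below is a placeholder, and in practice this would sit next to the app created in app/main.py:

from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()  # in practice, the app from app/main.py

# Allow only known origins rather than "*" in production
app.add_middleware(
    CORSMiddleware,
    allow_origins=["https://your-frontend.example.com"],  # placeholder origin
    allow_methods=["GET", "POST", "DELETE"],
    allow_headers=["Content-Type", "Authorization"],
)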

🤝 Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Add tests if applicable
  5. Submit a pull request

📄 License

This project is licensed under the MIT License.

🙏 Acknowledgments


Built with ❤️ for the AI community.