A production-ready, intelligent agent framework for building scalable AI-driven automation systems with advanced routing, monitoring, and LangGraph orchestration.
The AI Agent Framework is a modular Python library that enables organizations to build sophisticated AI agents capable of processing emails, webhooks, and other data sources. It features multi-provider LLM support, intelligent routing, state management, and comprehensive tooling with visual debugging through LangGraph Studio.
├── agents/ # Agent implementations (roles, tools, prompts)
├── graphs/ # LangGraph orchestration workflows
├── prompts/ # Separated prompt templates (YAML + Python)
├── tools/ # Custom tools for agents
├── memory/ # Persistent memory and state management
├── services/ # Integration layers (API, CLI, UI)
├── configs/ # Environment-specific configurations
├── utils/ # Helper utilities (logging, validation)
├── models/ # Data models and structures
└── main.py # Main entry point
┌──────────────────┐      ┌──────────────────┐      ┌──────────────────┐
│   Data Sources   │─────▶│    LangGraph     │─────▶│      Agents      │
│  • Email         │      │   Orchestrator   │      │  • Sales         │
│  • Webhooks      │      │  • Route Request │      │  • Support       │
│  • API Calls     │      │  • Process       │      │  • Custom        │
└──────────────────┘      │  • Validate      │      └────────┬─────────┘
                          │  • Finalize      │               │
┌──────────────────┐      └──────────────────┘               │
│    Monitoring    │◀────────────────────────────────────────┤
│  • LangSmith     │      ┌──────────────────┐               │
│  • Metrics       │      │    Responses     │◀──────────────┘
│  • Logging       │      │  • Email Reply   │
└──────────────────┘      │  • API Response  │
                          │  • Webhook       │
                          └──────────────────┘
- Python 3.11+ (Recommended: Python 3.13)
- Homebrew (for macOS)
- pip
- pipx (recommended for CLI tools)
1. Install Python 3.13 (macOS)

brew install python@3.13

2. Install pipx

brew install pipx
pipx ensurepath  # Add pipx to PATH

3. Install LangGraph CLI

pipx install "langgraph-cli[inmem]"

4. Clone the Repository

git clone https://github.com/yourusername/ai-multi-agent-framework.git
cd ai-multi-agent-framework

5. Create Virtual Environment

/opt/homebrew/bin/python3.13 -m venv venv
source venv/bin/activate

6. Install Dependencies

pip install -r requirements.txt
LangSmith provides advanced workflow visualization and debugging capabilities for your AI Agent Framework:
- Not required for basic functionality
- Offers deep insights into workflow execution
- Provides performance metrics and detailed error tracking
- Sign up at https://smith.langchain.com/
- Get an API key
- Set environment variables:
export LANGSMITH_API_KEY='your-api-key'
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_PROJECT=ai-agent-framework-demo
- Visualize complete workflow execution
- Step-by-step debugging
- Performance analysis
- Error tracking and insights
- The framework is primarily tested with Python 3.11 and 3.13
- Minimum supported version: Python 3.9
- For best experience, use Python 3.11+:
  - Ensures full compatibility with LangGraph CLI
  - Access to latest language features
  - Optimal performance
- If using Python < 3.11:
  - Some advanced features might be limited
  - Potential compatibility warnings
  - Recommended to upgrade your Python version
- API Key Requirements:
  - OpenAI and/or Anthropic API keys are required
  - Set these in the `.env` file or as environment variables
  - Without API keys, some functionality will be restricted
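As a sanity check at startup, a helper along these lines can report which providers are configured. The function name is illustrative, not a framework API:

```python
import os

# Illustrative startup check (the helper name is ours, not a framework API):
# report which LLM providers have keys configured in the environment.
def available_llm_providers():
    providers = []
    if os.getenv("OPENAI_API_KEY"):
        providers.append("openai")
    if os.getenv("ANTHROPIC_API_KEY"):
        providers.append("anthropic")
    return providers

if not available_llm_providers():
    print("Warning: no LLM API keys set; some functionality will be restricted")
```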
# Clone and setup
git clone https://github.com/ravi-sharma/ai-agent-framework.git
cd ai-agent-framework
# Install all dependencies (Python + LangGraph CLI)
./install_dependencies.sh
# Configure environment
cp env.example .env
# Edit .env and add your API keys
# Start LangGraph development server
langgraph dev --port 3005
# In another terminal, run the demo
python3 run_langgraph_demo.py
# Open LangGraph Studio in browser
# https://smith.langchain.com/studio/?baseUrl=http://127.0.0.1:3005
The framework includes comprehensive visualization and debugging capabilities:
1. Start the Development Server
# Activate virtual environment
source venv/bin/activate
# Start LangGraph dev server
langgraph dev --port 3005
2. Access LangGraph Studio
- Studio UI: https://smith.langchain.com/studio/?baseUrl=http://127.0.0.1:3005
- API Docs: http://localhost:3005/docs
- Local API: http://127.0.0.1:3005
3. Run Workflows
# Run automated demo scenarios
echo "1" | python3 run_langgraph_demo.py
# Or run interactively
python3 run_langgraph_demo.py
Setup LangSmith Tracing:
# Add to your .env file
LANGSMITH_API_KEY=your-api-key-here
LANGSMITH_PROJECT=ai-agent-framework-demo
LANGCHAIN_TRACING_V2=true
View Traces:
- Dashboard: https://smith.langchain.com/
- Project traces show complete workflow execution
- Step-by-step debugging with input/output inspection
- Performance metrics and error tracking
The framework implements a sophisticated multi-agent workflow:
┌──────────────────┐
│  route_request   │
└────────┬─────────┘
         ▼
┌──────────────────────┐
│  process_with_agent  │
│  • Sales Agent       │
│  • Support Agent     │
│  • Default Agent     │
└────────┬─────────────┘
         ▼
┌──────────────────┐
│ validate_result  │
└────────┬─────────┘
         ▼
┌──────────────────┐
│finalize_response │
└──────────────────┘
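The node sequence above can be sketched in plain Python. This is an illustrative stand-in for the real LangGraph graph in graphs/multiagent_graph.py; the routing keyword and the state fields are assumptions:

```python
import uuid

# Plain-Python sketch of the workflow's node sequence. The real graph lives in
# graphs/multiagent_graph.py; routing keywords and state fields here are
# illustrative assumptions, not the framework's actual logic.
def route_request(state):
    text = str(state["input"]).lower()
    state["agent"] = "sales_agent" if "pricing" in text else "default_agent"
    return state

def process_with_agent(state):
    state["output"] = f"Handled by {state['agent']}"
    return state

def validate_result(state):
    state["valid"] = bool(state.get("output"))
    return state

def finalize_response(state):
    state["workflow_id"] = str(uuid.uuid4())  # unique ID for tracing
    return state

def run_workflow(user_input):
    state = {"input": user_input}
    for node in (route_request, process_with_agent,
                 validate_result, finalize_response):
        state = node(state)
    return state
```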
The included demo runs three scenarios:
1. Sales Inquiry → Routes to SalesAgent
   - Input: "Interested in pricing for enterprise plan"
   - Shows intelligent routing based on keywords

2. General Support → Routes to DefaultAgent
   - Input: "How to reset my password"
   - Demonstrates fallback handling

3. Product Demo → Intelligent routing
   - Input: Complex form data with company info
   - Shows advanced routing logic
# List available agents
python3 main.py cli list-agents
# Process email
python3 main.py cli process-email examples/email_samples/sales_inquiry.json
# Process webhook
python3 main.py cli process-webhook --data '{"type": "support", "message": "Help needed"}'
# Start API server
python3 main.py api --host 0.0.0.0 --port 8000
# Run with specific config
python3 main.py --config configs/prod_config.py api
The AI Agent Framework supports processing emails in two formats:

1. EML Format (`.eml`, the standard email file format):
   - Parsed using Python's `email` module
   - Supports full email headers and multipart messages

2. JSON Format (`.json`, a custom JSON email representation):
   - Easier to generate programmatically
   - Consistent with the framework's input data model
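Parsing an EML message with the standard library looks like this; the sample message is made up for illustration:

```python
from email import message_from_string

# A made-up sample message; real input would come from an .eml file on disk.
RAW_EML = """\
Subject: Interested in pricing
From: sender@example.com
To: recipient@example.com

Could you send enterprise pricing details?
"""

msg = message_from_string(RAW_EML)
print(msg["Subject"])     # header access by name
print(msg["From"])
print(msg.get_payload())  # message body
```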
JSON Email Structure:
{
"source": "email",
"data": {
"email": {
"subject": "Email Subject",
"sender": "sender@example.com",
"recipient": "recipient@example.com",
"body": "Email body text",
"headers": {
"Date": "Timestamp",
"Message-ID": "Unique message identifier"
}
}
}
}
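A payload matching this structure can be built programmatically; the helper below is illustrative, not part of the framework API:

```python
import json

# Illustrative helper (not a framework API) that builds the documented
# JSON email structure.
def make_email_payload(subject, sender, recipient, body, headers=None):
    return {
        "source": "email",
        "data": {
            "email": {
                "subject": subject,
                "sender": sender,
                "recipient": recipient,
                "body": body,
                "headers": headers or {},
            }
        },
    }

payload = make_email_payload(
    "Email Subject", "sender@example.com", "recipient@example.com",
    "Email body text",
)
print(json.dumps(payload, indent=2))
```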
# Process sales inquiry email
python3 main.py cli process-email examples/email_samples/sales_inquiry.json
# Process support email
python3 main.py cli process-email examples/email_samples/support_email.json
# Process demo request email
python3 main.py cli process-email examples/email_samples/demo_request.json
# Process with specific agent
python3 main.py cli process-email examples/email_samples/sales_inquiry.json --agent sales_agent
Pro Tips:
- Use the provided example files to quickly test email processing
- Modify example files or create new ones in `examples/email_samples/`
- Specify an agent to override default routing
- Check logs for detailed processing information

You can customize email processing by:
- Modifying routing criteria in `graphs/multiagent_graph.py`
- Adding custom keywords in the `_select_agent` method
- Extending agent capabilities in respective agent classes
Pro Tip: Use environment variables to configure email processing behavior dynamically.
# LLM Providers
OPENAI_API_KEY=sk-your-key-here
ANTHROPIC_API_KEY=your-key-here
# LangSmith (Optional)
LANGSMITH_API_KEY=your-key-here
LANGSMITH_PROJECT=ai-agent-framework
LANGCHAIN_TRACING_V2=true
# Framework Settings
LOG_LEVEL=INFO
ENVIRONMENT=development
# configs/custom_config.py
from configs.base_config import BaseConfig

class CustomConfig(BaseConfig):
    def __init__(self):
        super().__init__()
        self.agents = {
            "sales_agent": {
                "enabled": True,
                "llm_provider": "openai",
                "model": "gpt-4",
                "temperature": 0.7
            },
            "support_agent": {
                "enabled": True,
                "llm_provider": "anthropic",
                "model": "claude-3-sonnet"
            }
        }
- Intelligent Routing: Automatic agent selection based on content analysis
- Specialized Agents: Sales, support, and custom agents with specific capabilities
- Fallback Handling: Graceful degradation to default agent when routing fails
- Visual Workflows: See your agent workflows in LangGraph Studio
- State Management: Persistent state across workflow steps
- Error Handling: Comprehensive error tracking and recovery
- Real-time Debugging: Step-through execution with full context
- LangSmith Integration: Complete trace visibility and debugging
- Performance Metrics: Execution time, success rates, error tracking
- Structured Logging: Comprehensive logging with context
- Health Checks: Built-in monitoring endpoints
- Scalable Architecture: Modular design for easy scaling
- Configuration Management: Environment-specific configurations
- Error Resilience: Comprehensive error handling and recovery
- Testing Suite: Unit, integration, and performance tests
- Trace Analysis: Complete workflow execution traces
- Performance Monitoring: Execution times and bottlenecks
- Error Tracking: Failed runs with detailed error context
- Agent Analytics: Success rates by agent type
- Visual Debugging: Interactive workflow visualization
- Real-time Execution: Watch workflows execute live
- State Inspection: Examine workflow state at each step
- Interactive Testing: Submit custom inputs and see results
# Enable debug logging
export LOG_LEVEL=DEBUG
# Run with tracing
export LANGCHAIN_TRACING_V2=true
python3 run_langgraph_demo.py
# Check server health
curl http://localhost:3005/ok
# Run all tests
python3 -m pytest tests/
# Run specific test categories
python3 -m pytest tests/test_agents.py
python3 -m pytest tests/integration/
python3 -m pytest tests/performance/
# Run with coverage
python3 -m pytest --cov=. tests/
# Load testing
python3 tests/performance/test_load_stress.py
# Build image
docker build -t ai-agent-framework .
# Run container
docker run -p 8000:8000 \
-e OPENAI_API_KEY=your-key \
-e LANGSMITH_API_KEY=your-key \
ai-agent-framework
# configs/prod_config.py
from configs.base_config import BaseConfig

class ProductionConfig(BaseConfig):
    def __init__(self):
        super().__init__()
        self.log_level = "INFO"
        self.enable_metrics = True
        self.rate_limiting = True
        self.max_concurrent_requests = 100
- API Key Management: Secure handling of LLM provider keys
- Input Validation: Comprehensive input sanitization
- Rate Limiting: Built-in request rate limiting
- Audit Logging: Complete audit trail of all operations
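To illustrate the rate-limiting idea (the framework's built-in implementation is not shown here, so this is only a sketch), a minimal token bucket looks like:

```python
import time

class TokenBucket:
    """Minimal token-bucket limiter, shown only to illustrate the idea;
    the framework's built-in rate limiting may be implemented differently."""

    def __init__(self, rate, capacity):
        self.rate = rate              # tokens refilled per second
        self.capacity = capacity      # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# Allow a burst of 5, then throttle (slow refill keeps the example deterministic)
bucket = TokenBucket(rate=0.1, capacity=5)
results = [bucket.allow() for _ in range(7)]  # 5 x True, then 2 x False
```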
- Async Processing: Non-blocking request handling
- Connection Pooling: Efficient LLM provider connections
- Caching: Intelligent response caching
- Load Balancing: Multi-instance deployment support
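The caching idea can be sketched with a prompt-keyed cache. This is an assumption about the approach, not the framework's actual caching layer:

```python
import hashlib
from functools import lru_cache

# Illustrative response cache keyed on a prompt hash; this is an assumption
# about the approach, not the framework's actual caching layer.
@lru_cache(maxsize=256)
def cached_completion(prompt_hash):
    # In the real system this is where the LLM provider would be called.
    return f"response-for-{prompt_hash[:8]}"

def complete(prompt):
    key = hashlib.sha256(prompt.encode("utf-8")).hexdigest()
    return cached_completion(key)

a = complete("Interested in pricing")
b = complete("Interested in pricing")  # identical prompt: served from cache
```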
# agents/custom_agent.py
from agents.base_agent import BaseAgent
from models import AgentResult  # adjust the import to where AgentResult is defined

class CustomAgent(BaseAgent):
    def __init__(self):
        super().__init__()
        self.name = "custom_agent"

    async def process(self, input_data):
        # Your custom logic here
        return AgentResult(
            success=True,
            output={"response": "Custom response"},
            agent_name=self.name
        )
# Modify graphs/multiagent_graph.py
def _select_agent(self, input_data):
    if "urgent" in str(input_data).lower():
        return "priority_agent"
    elif "technical" in str(input_data).lower():
        return "technical_agent"
    else:
        return "default_agent"
1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
LangGraph Studio won't connect:
- Ensure the dev server is running: `langgraph dev --port 3005`
- Check the correct URL: https://smith.langchain.com/studio/?baseUrl=http://127.0.0.1:3005
- Verify Python 3.11+ is being used

No traces in LangSmith:
- Verify `LANGSMITH_API_KEY` is set
- Check `LANGCHAIN_TRACING_V2=true`
- Ensure the project name matches in the LangSmith dashboard

Module import errors:
- Activate the virtual environment: `source venv/bin/activate`
- Reinstall dependencies: `pip install -r requirements.txt`
# Test LangSmith connection (force the lazy query to actually run)
python3 -c "from langsmith import Client; list(Client().list_runs(limit=1)); print('Connected')"
# Validate environment
python3 -c "import os; print('Keys set:', bool(os.getenv('LANGSMITH_API_KEY')))"
# Test workflow without tracing
LANGCHAIN_TRACING_V2=false python3 run_langgraph_demo.py
- Documentation: This README covers all major features
- Issues: Report bugs via GitHub Issues
- Discussions: Join community discussions for questions
- Examples: Check the `examples/` directory for usage patterns
Built with ❤️ using LangGraph, LangSmith, and modern Python practices.
1. Sign Up for LangSmith
   - Visit: https://smith.langchain.com/
   - Create a free account
   - Get your API key

2. Configure Environment Variables

# In your .env file or export in terminal
export LANGCHAIN_TRACING_V2=true
export LANGSMITH_PROJECT='ai-agent-framework'
export LANGSMITH_API_KEY='your-api-key-here'

3. Install LangGraph CLI

# Install LangGraph CLI with inmem support
python3 -m pip install --user -U "langgraph-cli[inmem]"
# Add to PATH (if needed)
export PATH="/Users/$USER/Library/Python/3.9/bin:$PATH"

4. Start LangGraph Development Server

# Start the dev server with blocking operations allowed
langgraph dev --port 3005 --allow-blocking

5. Run Email Processing

# Process an email with LangSmith tracing
python3 main.py cli process-email examples/email_samples/sales_inquiry.json

6. View Workflow Visualization
   - Open your browser to: http://localhost:3005
   - Connect with your LangSmith API key
   - View real-time workflow execution
1. Workflow Initialization
   - Capture input context
   - Generate unique workflow ID
   - Prepare for routing

2. Agent Routing
   - Analyze input data
   - Select appropriate agent
   - Log routing decision

3. Agent Processing
   - Execute selected agent's logic
   - Capture processing insights
   - Log processing results

4. Result Validation
   - Check response completeness
   - Verify processing success
   - Log validation status

5. Response Finalization
   - Compile final output
   - Add debugging metadata
   - Log workflow completion
No Graph Appearing?
- Confirm the API key is correct
- Ensure `LANGCHAIN_TRACING_V2` is `true`
- Check network connection
- Verify LangSmith project settings

Common Issues:
- Incorrect API key
- Network connectivity problems
- Firewall blocking LangSmith connections
# Programmatic LangSmith Configuration
import os
from langsmith import Client

client = Client(
    api_key=os.getenv('LANGSMITH_API_KEY'),
    project=os.getenv('LANGSMITH_PROJECT', 'ai-agent-framework')
)
- Tracing Overhead: Minimal performance impact
- Data Captured:
- Workflow execution times
- Agent processing details
- Error tracking
- Routing decisions
- API key is sensitive; keep it confidential
- Use environment variables for configuration
- Avoid hardcoding credentials
- Rotate API keys periodically
Please feel free to reach out if you have any questions or need assistance with the framework. We're here to help!