A sophisticated multi-agent coding assistant powered by LangGraph and Ollama, designed to help with complex software development tasks through specialized AI agents.
- Multi-Agent Architecture: Coordinated agents working together for complex tasks
- Local AI Models: Uses Ollama for privacy and control over your AI models
- Per-Agent Model Optimization: Different models optimized for each agent's specialty
- Automatic Model Management: Download required models on first run
- Automatic Background Indexing: Real-time codebase analysis with file watching
- Flexible Indexing Control: Enable/disable automatic indexing with CLI options
- Command Safety Controls: Three security modes for system command execution
- Specialized Agents (10 total):
  - WebSearchAgent (`llama3.1:8b`): Searches for documentation, examples, best practices
  - CodeReviewAgent (`qwen2.5-coder:32b`): Analyzes code quality, bugs, security issues
  - CodeAnalyzerAgent (`qwen2.5-coder:32b`): Indexes codebases, tracks dependencies, AST analysis
  - FileOperationsAgent (`qwen2.5-coder:32b`): Direct file manipulation - read, write, edit files
  - TestGeneratorAgent (`qwen2.5-coder:32b`): Creates comprehensive unit and integration tests
  - RefactoringAgent (`qwen2.5-coder:32b`): Code optimization, design patterns, modernization
  - GitAgent (`llama3.1:8b`): Git workflows, commit messages, merge conflict resolution
  - DocumentationAgent (`llama3.1:8b`): README files, API docs, docstrings, user guides
  - CommandLineAgent (`llama3.1:8b`): System command execution with safety controls
  - SupervisorAgent (`qwen2.5:32b`): Orchestrates multi-agent workflows using LangGraph
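The automatic model management feature amounts to comparing the models the local Ollama already has against the ones the agents need, and pulling the rest. A minimal sketch, assuming a `subprocess`-based wrapper around the `ollama` CLI (the function names here are illustrative, not the project's actual implementation):

```python
import subprocess

# Models the agents above are configured to use.
REQUIRED_MODELS = ["qwen2.5-coder:32b", "qwen2.5:32b", "llama3.1:8b"]

def missing_models(installed_output: str, required: list) -> list:
    """Return the required models not mentioned in `ollama list` output."""
    return [m for m in required if m not in installed_output]

def ensure_models(required=REQUIRED_MODELS) -> None:
    """Pull any required model the local Ollama does not have yet."""
    listing = subprocess.run(["ollama", "list"],
                             capture_output=True, text=True)
    for model in missing_models(listing.stdout, required):
        subprocess.run(["ollama", "pull", model], check=True)
```

On first run this downloads multi-gigabyte models, so expect the initial startup to take a while.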
- Python 3.9+
- Ollama installed and running locally
- qwen2.5-coder:32b model (or your preferred coding model)
- Clone the repository:

  ```bash
  git clone git@github.com:adam-hanna/ollama-agentic-coder.git
  cd ollama-agentic-coder
  ```

- Create a virtual environment:

  ```bash
  python -m venv .venv
  source ./.venv/bin/activate  # On Windows: .venv\Scripts\activate
  ```

- Install dependencies:

  ```bash
  pip install -r requirements.txt -r requirements-dev.txt
  ```

- Install Ollama (if not already installed):

  ```bash
  # macOS
  brew install ollama

  # Linux
  curl -fsSL https://ollama.ai/install.sh | sh
  ```

  On Windows, download from https://ollama.ai/download

- Pull the models:

  ```bash
  ollama pull qwen2.5-coder:32b
  ollama pull qwen2.5:32b
  ollama pull llama3.1:8b
  ```

The system can be configured via environment variables or the default config:
```bash
# Ollama settings
export OLLAMA_HOST="http://localhost:11434"
export OLLAMA_MODEL="qwen2.5-coder:32b"
export OLLAMA_TEMPERATURE="0.1"
export OLLAMA_MAX_TOKENS="4096"
export OLLAMA_TIMEOUT="300"
```

You can use any Ollama-compatible model:
```bash
# Other excellent coding models
ollama pull deepseek-coder:33b
ollama pull codellama:34b
ollama pull starcoder2:15b
```

The CommandLineAgent operates in three safety modes:
- SAFE (default): Only pre-approved read-only commands (ls, cat, grep, etc.)
- WHITELIST: Safe commands + user-approved commands (prompts for approval)
- YOLO: All commands allowed (use with extreme caution)
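As a rough sketch of how these modes gate execution (the allow-list contents and the function shape below are assumptions for illustration; the real checks live inside the CommandLineAgent):

```python
import shlex
from typing import Optional, Set

# Illustrative read-only allow-list; the agent's actual list may differ.
SAFE_COMMANDS = {"ls", "cat", "grep", "head", "tail", "find", "wc", "pwd"}

def is_command_allowed(command: str, mode: str,
                       approved: Optional[Set[str]] = None) -> bool:
    """Decide whether `command` may run under the given safety mode."""
    parts = shlex.split(command)
    program = parts[0] if parts else ""
    if mode == "yolo":
        return True                       # everything allowed -- dangerous
    if mode == "safe":
        return program in SAFE_COMMANDS   # pre-approved read-only commands
    if mode == "whitelist":
        return program in SAFE_COMMANDS or program in (approved or set())
    return False                          # unknown mode: fail closed
```

In WHITELIST mode, commands outside the safe set would trigger an interactive approval prompt before being added to the approved set.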
Configure via environment variable:

```bash
export COMMAND_SAFETY_MODE="safe"  # or "whitelist" or "yolo"
```

- Run Ollama:

  ```bash
  ollama serve
  ```

- Run the application:

  ```bash
  python main.py
  ```

View the available CLI options:

```bash
python main.py --help
```
```bash
python main.py --host http://localhost:11434 --model qwen2.5-coder:32b --temperature 0.1

# Indexing control options
python main.py --no-auto-index                  # Disable automatic background indexing
python main.py --no-file-watch                  # Disable file change watching
python main.py --no-auto-index --no-file-watch  # Disable both
```

Once running, you can use these commands:

- `/help` - Show available commands and indexing status
- `/index` - Manual indexing commands:
  - `/index` - Index current directory
  - `/index [path]` - Index specific directory
  - `/index auto` - Toggle automatic background indexing
  - `/index stop` - Stop background indexing
- `/config` - Show current configuration
- `/agents` - List available agents and their assigned models
- `/clear` - Clear conversation history
- `/exit` - Exit the application
Code Review:
You: Review this Python file for bugs and security issues: src/auth.py
File Operations:
You: Create a new config.py file with database settings
You: Edit the main.py file to add logging configuration
Test Generation:
You: Generate comprehensive unit tests for the User class
You: Create integration tests for the authentication workflow
Refactoring:
You: Analyze auth.py for refactoring opportunities and suggest improvements
You: Refactor this complex function to use better design patterns
Git Operations:
You: Generate a commit message for my current changes
You: Help me resolve this merge conflict in src/models.py
Documentation:
You: Create a README for this project
You: Add comprehensive docstrings to all functions in this file
Web Search:
You: Search for best practices for async Python error handling
Command Line Operations:
You: List all Python files in the current directory
You: Check the status of the development server
You: Run the test suite and show me the results
Complex Multi-Agent Tasks:
You: I need to implement user authentication. Create the models, generate tests, add documentation, and suggest a Git workflow.
You: Refactor my authentication system, generate tests for the changes, and update the documentation.
```
SupervisorAgent (qwen2.5:32b) - LangGraph Orchestration
├── WebSearchAgent (llama3.1:8b) - DuckDuckGo + Web Search
├── CodeReviewAgent (qwen2.5-coder:32b) - AST + Static Analysis
├── CodeAnalyzerAgent (qwen2.5-coder:32b) - Tree-sitter + Indexing
├── FileOperationsAgent (qwen2.5-coder:32b) - File System Operations
├── TestGeneratorAgent (qwen2.5-coder:32b) - Test Creation & Coverage
├── RefactoringAgent (qwen2.5-coder:32b) - Code Optimization
├── GitAgent (llama3.1:8b) - Version Control Operations
├── DocumentationAgent (llama3.1:8b) - Documentation Generation
└── CommandLineAgent (llama3.1:8b) - System Command Execution
```
- User Input → SupervisorAgent analyzes request
- Task Analysis → AI determines which agent(s) are needed
- Agent Selection → Route to appropriate specialized agent(s)
- Background Processing → Real-time codebase indexing if enabled
- Agent Execution → Specialized agents work on their tasks
- Result Synthesis → SupervisorAgent combines results into coherent response
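The task-analysis and agent-selection steps can be illustrated with a toy router. The real SupervisorAgent uses an LLM over LangGraph rather than keyword matching; the table below is purely illustrative:

```python
# Toy keyword router standing in for the LLM-based task analysis.
KEYWORD_ROUTES = {
    "review": "code_review",
    "test": "test_generator",
    "refactor": "refactoring",
    "commit": "git",
    "document": "documentation",
    "search": "websearch",
}

def select_agents(request: str) -> list:
    """Map a user request to the specialist agents that should handle it."""
    text = request.lower()
    matched = [agent for keyword, agent in KEYWORD_ROUTES.items()
               if keyword in text]
    return matched or ["supervisor"]  # no match: let the supervisor answer
```

A request that mentions several tasks fans out to several agents, which is what enables the multi-agent examples above.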
- Coding Tasks: `qwen2.5-coder:32b` - Specialized for code analysis, generation, review
- General Tasks: `llama3.1:8b` - Lighter, faster for search, git, documentation
- Orchestration: `qwen2.5:32b` - Strong reasoning for task coordination
- Create a new agent class inheriting from `BaseAgent`:

  ```python
  from core.base_agent import BaseAgent, AgentState

  class MyCustomAgent(BaseAgent):
      def _default_system_prompt(self) -> str:
          return "Your specialized agent prompt here"

      async def process(self, state: AgentState) -> AgentState:
          # Your agent logic here
          pass
  ```

- Register it in the SupervisorAgent workflow
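How registration happens depends on the SupervisorAgent's LangGraph wiring; a minimal stand-in registry (the class and method names below are assumptions, not the project's actual API) might look like:

```python
# Hypothetical registry mapping agent names to instances for routing.
class AgentRegistry:
    def __init__(self):
        self._agents = {}

    def register(self, name, agent):
        """Make an agent routable under a short name."""
        self._agents[name] = agent

    def get(self, name):
        return self._agents[name]

    def names(self):
        return sorted(self._agents)
```

The supervisor would then add one graph node per registered agent and route to it by name.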
The system supports any Ollama model. You can customize per-agent models in `core/config.py`:

```python
models: Dict[str, str] = {
    "supervisor": "qwen2.5:32b",
    "websearch": "llama3.1:8b",
    "code_review": "qwen2.5-coder:32b",
    "file_operations": "qwen2.5-coder:32b",
    # ... customize as needed
}
```

Popular model choices:

- `qwen2.5-coder:32b` - Excellent for coding tasks
- `deepseek-coder:33b` - Strong coding capabilities
- `codellama:34b` - Meta's coding model
- `starcoder2:15b` - Smaller but capable
- `llama3.1:8b` - Fast general-purpose model
- `qwen2.5:32b` - Strong reasoning for coordination
```bash
# Run the test suite
python -m pytest tests/

# Test specific components
python -m pytest tests/test_agents.py
```

```
langgraph-ollama-agent/
├── core/                        # Core components
│   ├── config.py                # Configuration management
│   ├── ollama_client.py         # Ollama API client
│   └── base_agent.py            # Base agent class
├── agents/                      # Specialized agents
│   ├── websearch_agent.py
│   ├── code_review_agent.py
│   ├── code_analyzer_agent.py
│   ├── file_operations_agent.py
│   ├── test_generator_agent.py
│   ├── refactoring_agent.py
│   ├── git_agent.py
│   ├── documentation_agent.py
│   ├── command_line_agent.py
│   └── supervisor_agent.py
├── cli/                         # Command line interface
│   └── main.py
├── utils/                       # Utility functions
├── tests/                       # Test files
├── requirements.txt             # Dependencies
├── main.py                      # Entry point
└── README.md                    # This file
```
Ollama Connection Error:

```bash
# Check if Ollama is running
ollama list

# Start Ollama if needed
ollama serve
```

Model Not Found:

```bash
# Pull the required model
ollama pull qwen2.5-coder:32b
```

Memory Issues:

- Use smaller models like `qwen2.5-coder:14b` or `starcoder2:7b`
- Reduce `max_tokens` in configuration
Slow Performance:

- Use GPU acceleration if available
- Reduce model size (use smaller variants like `qwen2.5-coder:14b`)
- Disable background indexing: `--no-auto-index`
- Disable file watching: `--no-file-watch`
- Adjust temperature settings
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
- LangGraph for the multi-agent orchestration
- Ollama for local LLM serving
- Qwen2.5-Coder for the excellent coding model
- Tree-sitter for code parsing
- File system operations (read, write, edit) ✅ Implemented
- Git integration for version control ✅ Implemented
- Test generation and coverage analysis ✅ Implemented
- Code refactoring and optimization ✅ Implemented
- Documentation generation ✅ Implemented
- Command line operations with safety controls ✅ Implemented
- Automatic model downloading and management ✅ Implemented
- Integration with popular IDEs (VS Code extension)
- Web interface option
- Custom workflow definitions via YAML/JSON
- Plugin system for extending agents
- Collaborative multi-user sessions
- Database integration agent
- Deployment and DevOps agent
- Security scanning and vulnerability assessment
- Performance profiling and optimization agent