Trae Agent

Trae Agent is an LLM-based agent for general-purpose software engineering tasks. It provides a powerful CLI interface that can understand natural language instructions and execute complex software engineering workflows using various tools and LLM providers.

For technical details, please refer to our technical report.

Project Status: The project is under active development. Please refer to docs/roadmap.md and CONTRIBUTING.md if you would like to help us improve Trae Agent.

Difference with Other CLI Agents: Trae Agent offers a transparent, modular architecture that researchers and developers can easily modify, extend, and analyze, making it an ideal platform for studying AI agent architectures, conducting ablation studies, and developing novel agent capabilities. This research-friendly design enables the academic and open-source communities to contribute to and build upon the foundational agent framework, fostering innovation in the rapidly evolving field of AI agents.

✨ Features

  • 🌊 Lakeview: Provides short, concise summaries of agent steps
  • 🤖 Multi-LLM Support: Works with OpenAI, Anthropic, Doubao, Azure, OpenRouter, Ollama, and Google Gemini APIs
  • 🛠️ Rich Tool Ecosystem: File editing, bash execution, sequential thinking, and more
  • 🎯 Interactive Mode: Conversational interface for iterative development
  • 📊 Trajectory Recording: Detailed logging of all agent actions for debugging and analysis
  • ⚙️ Flexible Configuration: YAML-based configuration with environment variable support
  • 🚀 Easy Installation: Simple pip-based installation

🚀 Quick Start

Installation

We strongly recommend using uv to set up the project.

git clone https://github.com/bytedance/trae-agent.git
cd trae-agent
uv venv
uv sync --all-extras

# Activate the virtual environment
source .venv/bin/activate

or use make.

make uv-venv
make uv-sync

# Activate the virtual environment
source .venv/bin/activate

Setup API Keys

We recommend configuring Trae Agent using the config file.

Configuration Setup:

  1. Copy the example configuration file:

    cp trae_config.yaml.example trae_config.yaml
  2. Edit trae_config.yaml and replace the placeholder values with your actual credentials:

    • Replace your_anthropic_api_key with your actual Anthropic API key
    • Add additional model providers as needed (OpenAI, Google, Azure, etc.)
    • Configure your preferred models and settings
  3. (Optional) Add an mcp_servers section to enable the agent to call MCP services. You can configure MCP services by adding an mcp_servers section in trae_config.yaml. Here's an example configuration for integrating Playwright MCP:

       default_provider: anthropic
       max_steps: 20
       enable_lakeview: true
       mcp_servers:
         playwright:
           command: npx
           args:
             - "@playwright/mcp@0.0.27"

Note: The trae_config.yaml file is ignored by git to prevent accidentally committing your API keys.

Legacy JSON Configuration: If you're using the older JSON configuration format, please refer to docs/legacy_config.md for instructions. We recommend migrating to the new YAML format.

You can also set your API keys as environment variables:

# For OpenAI
export OPENAI_API_KEY="your-openai-api-key"

# For Anthropic
export ANTHROPIC_API_KEY="your-anthropic-api-key"

# For Doubao (also works with other OpenAI-compatible model providers)
export DOUBAO_API_KEY="your-doubao-api-key"
export DOUBAO_BASE_URL="your-model-provider-base-url"

# For OpenRouter
export OPENROUTER_API_KEY="your-openrouter-api-key"

# For Google Gemini
export GOOGLE_API_KEY="your-google-api-key"

# Optional: For OpenRouter rankings
export OPENROUTER_SITE_URL="https://your-site.com"
export OPENROUTER_SITE_NAME="Your App Name"

# Optional: To use a specific OpenAI-compatible API provider, set the base URL here
export OPENAI_BASE_URL="your-openai-compatible-api-base-url"

Although you can pass your API key directly using the api_key argument, we suggest using python-dotenv and adding MODEL_API_KEY="My API Key" to your .env file. This keeps your API key out of source control.
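For illustration, here is a minimal sketch of the kind of loading python-dotenv performs: parsing simple KEY=value lines from a .env file into os.environ. This is not the library's implementation; the real load_dotenv() additionally handles quoting, comments inside values, and variable interpolation.

```python
import os


def load_env_file(path: str = ".env") -> dict[str, str]:
    """Load simple KEY=value lines from a .env file into os.environ.

    Illustrative sketch only; use python-dotenv's load_dotenv() in practice.
    """
    loaded: dict[str, str] = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blanks, comments, and lines without an assignment
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            key = key.strip()
            value = value.strip().strip('"').strip("'")
            os.environ.setdefault(key, value)  # existing env vars win
            loaded[key] = value
    return loaded
```

In practice, install the real library (`pip install python-dotenv`) and call `load_dotenv()` at startup.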

Basic Usage

# Run a simple task
trae-cli run "Create a hello world Python script"

# Run with Doubao
trae-cli run "Create a hello world Python script" --provider doubao --model doubao-seed-1.6

# Run with Google Gemini
trae-cli run "Create a hello world Python script" --provider google --model gemini-2.5-flash

📖 Usage

Command Line Interface

The main entry point is the trae-cli command with several subcommands:

trae-cli run - Execute a Task

# Basic task execution
trae-cli run "Create a Python script that calculates fibonacci numbers"

# With specific provider and model
trae-cli run "Fix the bug in main.py" --provider anthropic --model claude-sonnet-4-20250514

# Using OpenRouter with any supported model
trae-cli run "Optimize this code" --provider openrouter --model "openai/gpt-4o"
trae-cli run "Add documentation" --provider openrouter --model "anthropic/claude-3-5-sonnet"

# Using Google Gemini
trae-cli run "Implement a data parsing function" --provider google --model gemini-2.5-pro

# With custom working directory
trae-cli run "Add unit tests for the utils module" --working-dir /path/to/project

# Save trajectory for debugging
trae-cli run "Refactor the database module" --trajectory-file debug_session.json

# Force to generate patches
trae-cli run "Update the API endpoints" --must-patch

trae-cli interactive - Interactive Mode

# Start interactive session
trae-cli interactive

# With custom configuration
trae-cli interactive --provider openai --model gpt-4o --max-steps 30

In interactive mode, you can:

  • Type any task description to execute it
  • Use status to see agent information
  • Use help for available commands
  • Use clear to clear the screen
  • Use exit or quit to end the session
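The command handling above can be sketched as a simple dispatch loop. This is a hypothetical illustration of the interactive-mode contract, not Trae Agent's actual implementation:

```python
def handle_command(line: str) -> str:
    """Dispatch one interactive-mode input line.

    Hypothetical sketch: the real CLI wires these commands to a live
    agent session rather than returning strings.
    """
    cmd = line.strip().lower()
    if cmd in ("exit", "quit"):
        return "end session"
    if cmd == "status":
        return "show agent information"
    if cmd == "help":
        return "list available commands"
    if cmd == "clear":
        return "clear the screen"
    # Anything else is treated as a task description to execute
    return f"execute task: {line.strip()}"
```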

trae-cli show-config - Configuration Status

trae-cli show-config

# With custom config file
trae-cli show-config --config-file trae_config.yaml

Configuration

Trae Agent uses a YAML configuration file for settings. Please refer to the trae_config.yaml.example file in the root directory for the detailed configuration structure.

YAML Configuration Structure

The YAML configuration file is organized into several main sections:

  • agents: Configure agent behavior, tools, and models
  • lakeview: Configure the summarization feature
  • model_providers: Define API credentials and settings for different LLM providers
  • models: Define specific model configurations with parameters

Example YAML configuration:

agents:
  trae_agent:
    enable_lakeview: true
    model: trae_agent_model
    max_steps: 200
    tools:
      - bash
      - str_replace_based_edit_tool
      - sequentialthinking
      - task_done

model_providers:
  anthropic:
    api_key: your_anthropic_api_key
    provider: anthropic
  openai:
    api_key: your_openai_api_key
    provider: openai

models:
  trae_agent_model:
    model_provider: anthropic
    model: claude-sonnet-4-20250514
    max_tokens: 4096
    temperature: 0.5
    top_p: 1
    max_retries: 10
    parallel_tool_calls: true

WARNING: For Doubao users, please use the following base_url.

base_url=https://ark.cn-beijing.volces.com/api/v3/

Configuration Priority:

  1. Command-line arguments (highest)
  2. Configuration file values
  3. Environment variables
  4. Default values (lowest)
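The priority order above can be sketched as a small resolver. The function and variable names here are assumptions for illustration, not Trae Agent's internals:

```python
import os


def resolve(key: str, cli_args: dict, config: dict, env_var: str, default=None):
    """Resolve one setting using the documented priority:
    CLI argument > config file value > environment variable > default."""
    if cli_args.get(key) is not None:
        return cli_args[key]
    if config.get(key) is not None:
        return config[key]
    if os.environ.get(env_var):
        return os.environ[env_var]
    return default
```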
Provider Examples:

# Use GPT-4o through OpenRouter
trae-cli run "Write a Python script" --provider openrouter --model "openai/gpt-4o"

# Use Claude through OpenRouter
trae-cli run "Review this code" --provider openrouter --model "anthropic/claude-3-5-sonnet"

# Use Gemini through OpenRouter
trae-cli run "Generate docs" --provider openrouter --model "google/gemini-pro"

# Use Gemini directly
trae-cli run "Analyze this dataset" --provider google --model gemini-2.5-flash

# Use Qwen through Ollama
trae-cli run "Comment this code" --provider ollama --model "qwen3"

Popular OpenRouter Models:

  • openai/gpt-4o - Latest GPT-4 model
  • anthropic/claude-3-5-sonnet - Excellent for coding tasks
  • google/gemini-pro - Strong reasoning capabilities
  • meta-llama/llama-3.1-405b - Open source alternative
  • openai/gpt-4o-mini - Fast and cost-effective

Environment Variables

  • OPENAI_API_KEY - OpenAI API key
  • ANTHROPIC_API_KEY - Anthropic API key
  • GOOGLE_API_KEY - Google Gemini API key
  • OPENROUTER_API_KEY - OpenRouter API key
  • OPENROUTER_SITE_URL - (Optional) Your site URL for OpenRouter rankings
  • OPENROUTER_SITE_NAME - (Optional) Your site name for OpenRouter rankings
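OpenRouter attributes traffic for its rankings via optional request headers. Here is a hedged sketch of building those headers from the two optional variables above, following OpenRouter's documented HTTP-Referer / X-Title convention (this helper is illustrative, not part of Trae Agent):

```python
import os


def openrouter_headers() -> dict[str, str]:
    """Build optional OpenRouter attribution headers from the environment.

    Both headers are optional and omitted when the variables are unset.
    """
    headers: dict[str, str] = {}
    if os.environ.get("OPENROUTER_SITE_URL"):
        headers["HTTP-Referer"] = os.environ["OPENROUTER_SITE_URL"]
    if os.environ.get("OPENROUTER_SITE_NAME"):
        headers["X-Title"] = os.environ["OPENROUTER_SITE_NAME"]
    return headers
```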

🛠️ Available Tools

Trae Agent provides a comprehensive toolkit for file editing, bash execution, structured thinking, task completion, and JSON manipulation, with new tools actively being developed and existing ones continuously enhanced.

For detailed information about all available tools and their capabilities, see docs/tools.md.

📊 Trajectory Recording

Trae Agent automatically records detailed execution trajectories for debugging and analysis:

# Auto-generated trajectory file
trae-cli run "Debug the authentication module"
# Saves to: trajectories/trajectory_20250612_220546.json

# Custom trajectory file
trae-cli run "Optimize the database queries" --trajectory-file optimization_debug.json

Trajectory files contain:

  • LLM Interactions: All messages, responses, and tool calls
  • Agent Steps: State transitions and decision points
  • Tool Usage: Which tools were called and their results
  • Metadata: Timestamps, token usage, and execution metrics
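As an illustration, a trajectory file with contents like those above could be summarized step by step. The field names here ("steps", "tool", "state") are assumptions, not the actual schema; consult docs/TRAJECTORY_RECORDING.md for the real format:

```python
import json


def summarize_trajectory(path: str) -> str:
    """Return a one-line summary per recorded step.

    Field names are hypothetical; see docs/TRAJECTORY_RECORDING.md
    for the actual trajectory schema.
    """
    with open(path) as f:
        data = json.load(f)
    lines = []
    for i, step in enumerate(data.get("steps", []), start=1):
        tool = step.get("tool", "-")
        state = step.get("state", "?")
        lines.append(f"step {i}: tool={tool} state={state}")
    return "\n".join(lines)
```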

For more details, see docs/TRAJECTORY_RECORDING.md.

🤝 Contributing

For contribution guidelines, please refer to CONTRIBUTING.md.

📋 Requirements

  • Python 3.12+
  • API key for your chosen provider:
    • OpenAI API key (for OpenAI models)
    • Anthropic API key (for Anthropic models)
    • OpenRouter API key (for OpenRouter models)
    • Google API key (for Google Gemini models)

🔧 Troubleshooting

Common Issues

Import Errors:

# Try setting PYTHONPATH
PYTHONPATH=. trae-cli run "your task"

API Key Issues:

# Verify your API keys are set
echo $OPENAI_API_KEY
echo $ANTHROPIC_API_KEY
echo $GOOGLE_API_KEY
echo $OPENROUTER_API_KEY

# Check configuration
trae-cli show-config

Permission Errors:

# Ensure proper permissions for file operations
chmod +x /path/to/your/project

"Command not found" Errors:

# Try running through uv
uv run trae-cli "your task"

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

✍️ Citation

@article{traeresearchteam2025traeagent,
      title={Trae Agent: An LLM-based Agent for Software Engineering with Test-time Scaling},
      author={Trae Research Team and Pengfei Gao and Zhao Tian and Xiangxin Meng and Xinchen Wang and Ruida Hu and Yuanan Xiao and Yizhou Liu and Zhao Zhang and Junjie Chen and Cuiyun Gao and Yun Lin and Yingfei Xiong and Chao Peng and Xia Liu},
      year={2025},
      eprint={2507.23370},
      archivePrefix={arXiv},
      primaryClass={cs.SE},
      url={https://arxiv.org/abs/2507.23370},
}

🙏 Acknowledgments

We thank Anthropic for building the anthropic-quickstarts project that served as a valuable reference for the tool ecosystem.