
Devon: An open-source pair programmer


How do y'all ship so quickly?

Join our Discord community ← We have a community-driven Dev Team for this repo. Come join us! It's great.

Installation

Prerequisites

  1. Node.js and npm
  2. pipx (see the pipx installation docs if you don't have it)
  3. An API key (just one is required)
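To confirm the prerequisites are on your PATH before installing, a quick check like the following works (a minimal sketch; it only reports what is installed and changes nothing):

```shell
# Report which prerequisites are present on this machine
for cmd in node npm pipx; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "$cmd: found"
  else
    echo "$cmd: MISSING"
  fi
done
```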

We're currently working on supporting Windows! (Let us know if you can help)

Installation commands

To install, simply run:

curl -sSL https://raw.githubusercontent.com/entropy-research/Devon/main/install.sh | bash

Or to install using pipx + npm:

# For the backend
pipx install devon_agent

# For the terminal UI
npm install -g devon-tui

# And for the main UI
npx devon-ui

If you already have devon_agent or devon-tui installed, update it by running:

npm uninstall -g devon-tui
npm install -g devon-tui

pipx install --force devon_agent

This installs the Python backend and the CLI command used to run the tool.

That's it! Happy building :)

Running the agent

To run the main UI, the command is:

npx devon-ui

It's that simple.

For the terminal UI:

  1. Navigate to your project folder and open the terminal.
  2. Set your Anthropic, OpenAI, or Groq API key as an environment variable:
export ANTHROPIC_API_KEY=sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

#OR

export OPENAI_API_KEY=sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

#OR

export GROQ_API_KEY=sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
  3. Then to run the terminal UI, the command is:
devon-tui

It's as easy as that.
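If devon-tui reports a missing key, you can sanity-check that the variable is actually exported to child processes, which is how the tool will see it (a sketch using a placeholder key; any of the three variable names works the same way):

```shell
# Export a placeholder key, then confirm a child shell can see it,
# the same way devon-tui would when launched from this terminal
export ANTHROPIC_API_KEY=sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
sh -c 'test -n "$ANTHROPIC_API_KEY" && echo "API key is visible to child processes"'
```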

Note

Don't worry: the agent can only access files and folders in the directory it was started from. You can also correct it while it's performing actions.


To run in debug mode, the command is:

devon-tui --debug

To run in local mode:

Warning

The current version of local model support is not mature, proceed with caution, and expect the performance to degrade significantly compared to the other options.

  1. Get DeepSeek Coder running with Ollama

  2. Start the local ollama server by running

ollama run deepseek-coder:6.7b
  3. Then configure Devon to use the model:
devon-tui configure

Configuring Devon CLI...
? Select the model name: 
  claude-opus 
  gpt4-o 
  llama-3-70b 
❯ ollama/deepseek-coder:6.7b
  4. And finally, run it with:
devon-tui --api_key=FOSS
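If the agent can't reach the model, it's worth checking that the local Ollama server is actually up (a sketch; Ollama serves on localhost:11434 by default, and its /api/tags endpoint lists the pulled models):

```shell
# Probe the default Ollama endpoint; prints a status line either way
if curl -s --max-time 2 http://localhost:11434/api/tags >/dev/null; then
  echo "Ollama server is up"
else
  echo "Ollama server is not reachable"
fi
```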

For a list of all commands available:

devon-tui --help

Features

  • Multi-file editing
  • Codebase exploration
  • Config writing
  • Test writing
  • Bug fixing
  • Architecture exploration
  • Local Model Support

Limitations

  • Minimal functionality for non-Python languages
  • You sometimes have to specify the file where you want the change to happen
  • Local mode is currently unreliable; please avoid it if you can

Progress

This project is still super early and we would love your help to make it great!

Current goals

  • Multi-model support
    • Claude 3 Opus
    • GPT4-o
    • Groq llama3-70b
    • Ollama deepseek-6.7b
    • Google Gemini 1.5 Pro
  • Launch plugin system for tool and agent builders
  • Improve our self-hostable Electron app
  • Set SOTA on SWE-bench Lite

View our current thoughts on next steps here

Star history

Star History Chart

Past milestones

  • June 14, 2024 - Launch Electron UI v0.0.13
  • June 1, 2024 - Devon V2 Beta Electron UI
  • May 19, 2024 - GPT4o support + better interface support v0.1.7
  • May 10, 2024 - Complete interactive agent v0.1.0
  • May 10, 2024 - Add steerability features
  • May 8, 2024 - Beat AutoCodeRover on SWE-Bench Lite
  • Mid April, 2024 - Add repo level code search tooling
  • April 2, 2024 - Begin development of v0.1.0 interactive agent
  • March 17, 2024 - Launch non-interactive agent v0.0.1

Current development priorities

  1. Improve context gathering and code indexing abilities, e.g.:
    • Adding memory modules
    • Improved code indexing
  2. Add alternative models and agents to:
    • Reduce end-user cost
    • Reduce end-user latency
  3. Electron app
    • Better code diff view
    • Timeline interface
    • Send user file events/changes to Devon

How can I contribute?

Devon and the entropy-research org are community-driven, and we welcome contributions from everyone! From tackling issues to building features to creating datasets, there are many ways to get involved:

  • Core functionality: Help us develop the core agents, user experience, tool integrations, plugins, etc.
  • Research: Help us research agent performance (including benchmarks!), build data pipelines, and finetune models.
  • Feedback and Testing: Use Devon, report bugs, suggest features, or provide feedback on usability.

For details, please check CONTRIBUTING.md.

If you would like to contribute to the project, please join our Discord.

Feedback

We would love feedback! Feel free to drop us a note on our Discord in the #feedback channel, or create issues!

We collect basic event-type (e.g., "tool call") and failure telemetry to fix bugs and improve the user experience, but if you want to reach out, we would love to hear from you!

To disable telemetry, set the environment variable DEVON_TELEMETRY_DISABLED to true:

export DEVON_TELEMETRY_DISABLED=true

Community

Join our Discord server and say hi!

License

Distributed under the GNU Affero General Public License v3.0 (AGPL-3.0). See LICENSE for more information.