AutoCodeRover: Autonomous Program Improvement

[Figure: overall workflow of AutoCodeRover]


📣 Updates

  • [April 19, 2024] AutoCodeRover now supports running on GitHub issues and local issues! Feel free to try it out and we welcome your feedback!

👋 Overview

AutoCodeRover is a fully automated approach for resolving GitHub issues (bug fixing and feature addition), in which LLMs are combined with analysis and debugging capabilities to prioritize patch locations, ultimately leading to a patch.

AutoCodeRover resolves ~16% of the issues in SWE-bench (2294 GitHub issues in total) and ~22% of the issues in SWE-bench lite (300 GitHub issues in total), improving over the current state-of-the-art efficacy of AI software engineers.

AutoCodeRover works in two stages:

  • 🔎 Context retrieval: The LLM is provided with code search APIs to navigate the codebase and collect relevant context.
  • 💊 Patch generation: The LLM tries to write a patch, based on retrieved context.
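
The interaction between the two stages can be pictured as a simple loop. The sketch below is illustrative only; all function names in it are hypothetical and are not ACR's actual API:

# Illustrative two-stage loop; all function names here are hypothetical.
context = []
while not enough_context(context):
    # Stage 1: the LLM decides which code search API to call next,
    # e.g. searching for a method or a class by name.
    query = llm_propose_search(issue, context)
    context += run_code_search(query)
# Stage 2: the LLM writes a patch based on the retrieved context.
patch = llm_write_patch(issue, context)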

✨ Highlights

AutoCodeRover has two unique features:

  • Code search APIs are Program Structure Aware. Instead of searching over files by plain string matching, AutoCodeRover searches for relevant code context (methods/classes) in the abstract syntax tree.
  • When a test suite is available, AutoCodeRover can take advantage of test cases to achieve an even higher repair rate, by performing statistical fault localization.
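
As a rough illustration of what structure-aware search means (a minimal sketch, not ACR's actual implementation), the snippet below locates a method in Python's abstract syntax tree instead of matching strings, so the result is a complete, syntactically meaningful code unit:

import ast

def find_method(source: str, name: str):
    """Return the source text of the first function or method named `name`."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)) and node.name == name:
            # Returns the full definition, not just the line that matched a string.
            return ast.get_source_segment(source, node)
    return None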

🗎 arXiv Paper

AutoCodeRover: Autonomous Program Improvement [arXiv 2404.05427]


To refer to our work, please cite:

@misc{zhang2024autocoderover,
      title={AutoCodeRover: Autonomous Program Improvement},
      author={Yuntong Zhang and Haifeng Ruan and Zhiyu Fan and Abhik Roychoudhury},
      year={2024},
      eprint={2404.05427},
      archivePrefix={arXiv},
      primaryClass={cs.SE}
}

✔️ Example: Django Issue #32347

As an example, AutoCodeRover successfully fixed issue #32347 of Django. See the demo video for the full process:

[Video: acr-final.mp4]

Enhancement: leveraging test cases

AutoCodeRover can resolve even more issues if test cases are available. See an example in the video:

[Video: acr_enhancement-final.mp4]

🚀 Setup & Running

We recommend running AutoCodeRover in a Docker container.

Set the OPENAI_KEY environment variable to your OpenAI API key:

export OPENAI_KEY=sk-YOUR-OPENAI-API-KEY-HERE

Build and start the Docker image:

docker build -f Dockerfile -t acr .
docker run -it -e OPENAI_KEY="${OPENAI_KEY:-${OPENAI_API_KEY}}" acr

Alternatively, you can use Dockerfile.scratch, which supports arm64 (Apple silicon) and ppc in addition to amd64. Dockerfile.scratch builds both SWE-bench (from https://github.com/yuntongzhang/SWE-bench.git) and ACR.

docker build -f Dockerfile.scratch -t acr .

Dockerfile.scratch accepts build args for customizing the build, like this:

docker build --build-arg GIT_EMAIL=your@email.com --build-arg GIT_NAME=your_id \
       --build-arg SWE_BENCH_REPO=https://github.com/your_id/SWE-bench.git \
       -f Dockerfile.scratch -t acr .

After setting up, we can run ACR in three modes:

  1. GitHub issue mode: Run ACR on a live GitHub issue by providing a link to the issue page.
  2. Local issue mode: Run ACR on a local repository and a file containing the issue description.
  3. SWE-bench mode: Run ACR on SWE-bench task instances.

[GitHub issue mode] Set up and run on new GitHub issues

If you want to use AutoCodeRover for new GitHub issues in a project, prepare the following:

  • Link to clone the project (used for git clone ...).
  • Commit hash of the project version for AutoCodeRover to work on (used for git checkout ...).
  • Link to the GitHub issue page.

Then, in the Docker container (or your local copy of AutoCodeRover), run the following commands to set up the target project and generate a patch:

cd /opt/auto-code-rover
conda activate auto-code-rover
PYTHONPATH=. python app/main.py github-issue \
       --output-dir output \
       --setup-dir setup \
       --model gpt-4-0125-preview \
       --model-temperature 0.2 \
       --task-id <task id> \
       --clone-link <link for cloning the project> \
       --commit-hash <any version that has the issue> \
       --issue-link <link to issue page>

Here is an example command for running ACR on an issue from the langchain GitHub issue tracker:

PYTHONPATH=. python app/main.py github-issue \
       --output-dir output \
       --setup-dir setup \
       --model gpt-4-0125-preview \
       --model-temperature 0.2 \
       --task-id langchain-20453 \
       --clone-link https://github.com/langchain-ai/langchain.git \
       --commit-hash cb6e5e5 \
       --issue-link https://github.com/langchain-ai/langchain/issues/20453

The <task id> can be any string used to identify this issue.

If patch generation is successful, the path to the generated patch will be printed at the end.
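
The generated patch is a standard diff file, so it can typically be applied to the target project with git apply (the paths below are placeholders):

cd <path to the cloned project>
git apply <path to the generated patch>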

[Local issue mode] Set up and run on local repositories and local issues

Instead of cloning a remote project and running ACR on an online issue, you can also prepare the local repository and the issue beforehand, if that suits your use case.

To run ACR on a local issue and a local codebase, prepare the codebase, write the issue description into a file, and run the following commands:

cd /opt/auto-code-rover
conda activate auto-code-rover
PYTHONPATH=. python app/main.py local-issue \
       --output-dir output \
       --model gpt-4-0125-preview \
       --model-temperature 0.2 \
       --task-id <task id> \
       --local-repo <path to the local project repository> \
       --issue-file <path to the file containing issue description>
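
For example, with a local clone at /path/to/project and an issue description saved to issue.txt (both placeholders, as is the issue text):

echo "Calling foo() on an empty list raises IndexError instead of returning None." > issue.txt
PYTHONPATH=. python app/main.py local-issue \
       --output-dir output \
       --model gpt-4-0125-preview \
       --model-temperature 0.2 \
       --task-id my-local-task \
       --local-repo /path/to/project \
       --issue-file issue.txt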

If patch generation is successful, the path to the generated patch will be printed at the end.

[SWE-bench mode] Set up and run on SWE-bench tasks

This mode is for running ACR on existing issue tasks contained in SWE-bench.

Set up

In the Docker container, we first need to set up the tasks to run in SWE-bench (e.g., django__django-11133). The list of all tasks can be found in conf/swe_lite_tasks.txt.

The tasks need to be put in a file, one per line:

cd /opt/SWE-bench
echo django__django-11133 > tasks.txt

Or, if running on arm64 (e.g., Apple silicon), try this task instead, which does not depend on Python 3.6 (Python 3.6 is not supported in this environment):

echo django__django-16041 > tasks.txt

Then, set up these tasks by running:

cd /opt/SWE-bench
conda activate swe-bench
python harness/run_setup.py --log_dir logs --testbed testbed --result_dir setup_result --subset_file tasks.txt

Once the setup for this task is completed, the following two lines will be printed:

setup_map is saved to setup_result/setup_map.json
tasks_map is saved to setup_result/tasks_map.json

The testbed directory will now contain the cloned source code of the target project. A conda environment will also be created for this task instance.

If you want to set up multiple tasks together, put their IDs in tasks.txt and follow the same steps.
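
For example, the two tasks mentioned above can be set up together like this:

cd /opt/SWE-bench
printf '%s\n' django__django-11133 django__django-16041 > tasks.txt
conda activate swe-bench
python harness/run_setup.py --log_dir logs --testbed testbed --result_dir setup_result --subset_file tasks.txt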

Run a single task in SWE-bench

Before running the task (django__django-11133 here), make sure it has been set up as mentioned above.

cd /opt/auto-code-rover
conda activate auto-code-rover
PYTHONPATH=. python app/main.py swe-bench \
       --model gpt-4-0125-preview \
       --setup-map ../SWE-bench/setup_result/setup_map.json \
       --tasks-map ../SWE-bench/setup_result/tasks_map.json \
       --output-dir output \
       --task django__django-11133

The output of the run can then be found in output/. For example, the patch generated for django__django-11133 can be found at a location like this: output/applicable_patch/django__django-11133_yyyy-MM-dd_HH-mm-ss/extracted_patch_1.diff (the date-time field in the directory name will be different depending on when the experiment was run).
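
Since the directory name is timestamped, a shell glob is a convenient way to locate a task's patch, e.g.:

ls output/applicable_patch/django__django-11133_*/extracted_patch_1.diff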

Run multiple tasks in SWE-bench

First, put the IDs of all tasks to run in a file, one per line. Assuming this file is tasks.txt, the tasks can be run with:

cd /opt/auto-code-rover
conda activate auto-code-rover
PYTHONPATH=. python app/main.py swe-bench \
       --model gpt-4-0125-preview \
       --setup-map ../SWE-bench/setup_result/setup_map.json \
       --tasks-map ../SWE-bench/setup_result/tasks_map.json \
       --output-dir output \
       --task-list-file /opt/SWE-bench/tasks.txt

NOTE: make sure that the tasks in tasks.txt have all been set up in SWE-bench. See the steps above.

Using a config file

Alternatively, a config file can be used to specify all parameters and tasks to run. See conf/vanilla-lite.conf for an example, and EXPERIMENT.md for details on the items in a conf file. A config file can be used like this:

python scripts/run.py conf/vanilla-lite.conf

Experiment Replication

Please refer to EXPERIMENT.md for information on experiment replication.

✉️ Contacts

For any queries, you are welcome to open an issue.

Alternatively, contact us at: {yuntong,hruan,zhiyufan}@comp.nus.edu.sg.

Acknowledgements

This work was partially supported by a Singapore Ministry of Education (MoE) Tier 3 grant "Automated Program Repair", MOE-MOET32021-0001.