An OpenAI LLM-based CLI coding assistant.
`llm-code` is inspired by Simon Willison's `llm` package. It takes a similar approach: a simple command-line tool that turns an LLM into an assistant for writing code.
```bash
pipx install llm-code
```
`llm-code` requires an OpenAI API key. You can get one from OpenAI.
You can set the key in a few different ways, depending on your preference:
- Set the `OPENAI_API_KEY` environment variable:

```bash
export OPENAI_API_KEY=sk-...
```
- Use an env file in `~/.llm_code/env`:

```bash
mkdir -p ~/.llm_code
echo "OPENAI_API_KEY=sk-..." > ~/.llm_code/env
```
`llm-code` is meant to be simple to use. The default prompts should be good enough. There are two broad modes:
- Generate some code from scratch:

```bash
llm-code "write a function that takes a list of numbers and returns the sum of the numbers in python. Add type hints."
```
- Pass in some input files and ask for changes:

```bash
llm-code -i my_file.py "add docstrings to all python functions."
```
```bash
llm-code --help
```

```
Usage: llm-code [OPTIONS] [INSTRUCTIONS]...

  Coding assistant using OpenAI's chat models.

  Requires OPENAI_API_KEY as an environment variable. Alternately, you can
  set it in ~/.llm_code/env.

Options:
  -i, --inputs TEXT  Glob of input files. Use repeatedly for multiple files.
  -cb, --clipboard   Copy code to clipboard.
  -nc, --no-cache    Don't use cache.
  -4, --gpt-4        Use GPT-4.
  --version          Show version.
  --help             Show this message and exit.
```
Any of the OpenAI parameters can be changed using environment variables. GPT-4 is the one exception: for convenience, you can also select it with the `-4` flag.
```bash
export MAX_TOKENS=2000
export TEMPERATURE=0.5
export MODEL=gpt-4
```
or
```bash
llm-code -4 ...
```
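In code, the mapping from these variables to request parameters might look something like the sketch below; the fallback defaults shown are illustrative assumptions, not llm-code's actual values:

```python
import os


def request_params(gpt_4: bool = False) -> dict:
    """Build chat-completion parameters from the environment.

    The fallback defaults below are assumptions for illustration.
    """
    return {
        "model": "gpt-4" if gpt_4 else os.environ.get("MODEL", "gpt-3.5-turbo"),
        "max_tokens": int(os.environ.get("MAX_TOKENS", "1000")),
        "temperature": float(os.environ.get("TEMPERATURE", "0.8")),
    }
```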
A common usage pattern is to examine the output of a model and either accept it or keep iterating on the prompt. When accepting the output, you'll typically append it to a file or copy it to the clipboard (using `pbcopy` on a Mac, for example). To support this inspect-and-accept workflow, `llm-code` caches the model's output in a local SQLite database, so you can replay the same query without hitting the OpenAI API.
```bash
llm-code 'write a function that takes a list of numbers and returns the sum of the numbers in python. Add type hints.'
```
Following this, assuming you like the output:
```bash
llm-code 'write a function that takes a list of numbers and returns the sum of the numbers in python. Add type hints.' > sum.py
```
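Conceptually, the cache is just a lookup table keyed by model and prompt. Here is a minimal sketch of the idea, with an assumed schema that may differ from what llm-code actually stores:

```python
import sqlite3


def cached_completion(db: sqlite3.Connection, model: str, prompt: str, generate) -> str:
    """Return the cached response for (model, prompt); call generate() on a miss."""
    db.execute(
        "CREATE TABLE IF NOT EXISTS cache ("
        "model TEXT, prompt TEXT, response TEXT, PRIMARY KEY (model, prompt))"
    )
    row = db.execute(
        "SELECT response FROM cache WHERE model = ? AND prompt = ?",
        (model, prompt),
    ).fetchone()
    if row:
        return row[0]
    response = generate()  # only hit the OpenAI API on a cache miss
    db.execute("INSERT INTO cache VALUES (?, ?, ?)", (model, prompt, response))
    db.commit()
    return response
```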
Borrowing simonw's excellent idea of logging to a local SQLite database, as demonstrated in `llm`, `llm-code` also logs every query to a local SQLite database. This is useful for a few reasons:

- It allows you to replay the same query without having to hit the OpenAI API.
- It allows you to see what queries you've made in the past, along with their responses and the number of tokens used (see the sketch below).
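Because the log is plain SQLite, you can inspect past queries with a few lines of Python. The database path and schema here are hypothetical, for illustration only; check where the actual database lives on disk and inspect it (for example with `.schema` in the sqlite3 shell):

```python
import sqlite3
from pathlib import Path

# Hypothetical path and schema -- not necessarily what llm-code uses.
db = sqlite3.connect(Path.home() / ".llm_code" / "logs.db")
for prompt, tokens in db.execute(
    "SELECT prompt, total_tokens FROM logs ORDER BY rowid DESC LIMIT 5"
):
    print(f"{tokens:>6} tokens  {prompt[:60]}")
```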
Simple hello world:

```bash
llm-code write hello world in rust
```

```rust
fn main() {
    println!("Hello, world!");
}
```
Sum of a list of numbers, with type hints:

```bash
llm-code "write a function that takes a list of numbers and returns the sum of the numbers in python. Add type hints."
```

```python
from typing import List


def sum_numbers(numbers: List[int]) -> int:
    return sum(numbers)
```
Let's assume we saved the output of the previous call in `out.py`. We can now say:

```bash
llm-code -i out.py "add appropriate docstrings"
```
```python
from typing import List


def sum_numbers(numbers: List[int]) -> int:
    """Return the sum of the given list of numbers."""
    return sum(numbers)
```
Or we could write some unit tests:

```bash
llm-code -i out.py "write a complete unit test file using pytest."
```
```python
import pytest
from typing import List

from my_module import sum_numbers


def test_sum_numbers():
    assert sum_numbers([1, 2, 3]) == 6
    assert sum_numbers([-1, 0, 1]) == 0
    assert sum_numbers([]) == 0
```
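Assuming you save the generated tests to a file such as `test_out.py` (the filename is your choice) and adjust the import to point at where the code actually lives, they run like any other pytest suite:

```bash
pytest test_out.py
```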
- Add a simple cache to replay the same query.
- Add logging to a local sqlite db.
- Add an `--exec` option to execute the generated code.
- Add a `--stats` option to output token counts.
- Add `pyperclip` integration to copy to clipboard.
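The clipboard integration is a thin wrapper: once the model's code is in hand, copying it boils down to a single `pyperclip` call. A sketch of the essence (the variable is a stand-in, not llm-code's actual code path):

```python
import pyperclip

generated_code = 'print("hello")'  # stand-in for the model's output
pyperclip.copy(generated_code)  # roughly what -cb/--clipboard does
```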