Maccarone lets you delegate sections of your Python program to AI ownership.
Here's what it looks like in the VS Code extension:
You might write some code like this:
```python
def main(path: str):
    #<<filenames = a list of filenames under path>>
    for fn in filenames:
        #<<size = size of fn in bytes>>
        print(fn, size)

#<<use argparse and call main>>
```
Maccarone then fills in the sections you've delegated:
```python
def main(path: str):
    #<<filenames = list of filenames under path; no dirs>>
    import os
    filenames = [
        f for f in os.listdir(path)
        if os.path.isfile(os.path.join(path, f))
    ]
    #<</>>
    for fn in filenames:
        #<<size = size of fn in bytes>>
        size = os.path.getsize(os.path.join(path, fn))
        #<</>>
        print(fn, size)

#<<use argparse and call main>>
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("path", type=str)
args = parser.parse_args()
main(args.path)
#<</>>
```
Make a change in your code, like adding an `extension` parameter to `main`, and Maccarone keeps its sections up to date:
```python
def main(path: str, extension: str | None = None):
    #<<filenames = list of filenames under path; no dirs>>
    …
    if extension:
        filenames = [f for f in filenames if f.endswith(extension)]
    #<</>>
    …

#<<use argparse and call main>>
…
parser.add_argument("--extension", type=str, default=None)
args = parser.parse_args()
main(args.path, args.extension)
#<</>>
```
- Python 3.8+
- OpenAI API key with GPT-4 access (`export OPENAI_API_KEY`)
Easy mode is the free extension from the VS Code marketplace.
Install it in VS Code and you're done (if you have the prerequisites above).
If you don't use VS Code, you can still install Maccarone directly from PyPI:
```shell
pip install maccarone
```
Then run `maccarone` to generate code and update your source file:

```shell
$ maccarone --rewrite examples/file_sizes.py
```
Maccarone can rewrite all files in a directory:
```shell
$ maccarone --rewrite --suffix .py examples/
```
Be careful! You should probably run this only on files that are under source control, so you can review and revert its changes.
Maccarone prompts GPT-4 to write code. It will make OpenAI API calls using your key and you will be charged by OpenAI.
API calls are made every time Maccarone preprocesses a new version of a source file.
The number of tokens consumed is proportional to the size of your completed code. You cannot accurately predict that number in advance. A small source module might cost $0.01–0.10 to preprocess.
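As a back-of-envelope illustration only (both the per-token price and the ~4-characters-per-token ratio below are assumptions, not Maccarone's numbers; check OpenAI's current pricing):

```python
def estimate_preprocess_cost(completed_chars: int, usd_per_1k_tokens: float = 0.03) -> float:
    """Very rough cost estimate: characters -> tokens -> dollars."""
    tokens = completed_chars / 4  # crude chars-per-token heuristic
    return tokens / 1000 * usd_per_1k_tokens

# a ~2,000-character completed module at an assumed $0.03/1K tokens
print(estimate_preprocess_cost(2000))  # → 0.015
```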
Whether this is a good idea depends on the strength of your faith in GPT-4.
Prompts written in languages other than English are likely to work, but less reliably than English.
The name is a nod to macaronic language: https://en.wikipedia.org/wiki/Macaronic_language
Maccarone was created to evaluate a specific flavor of LLM-assisted programming, and it feels feature-complete for that purpose.
PRs and bug reports are welcome, however, and there may be future maintenance releases.