Pals are persistent, ergonomic LLM assistants designed to help you complete repetitive, hard-to-automate tasks quickly. When created, they automatically generate RStudio add-ins registered to keyboard shortcuts. After selecting some code, press the keyboard shortcut you’ve chosen and watch your code be rewritten.
Much of the documentation in this package is aspirational, and its interface is likely to change rapidly. Note, especially, that keyboard shortcuts will have to be registered in the usual way (via Tools > Modify Keyboard Shortcuts > search "Pal") for now.
You can install pal like so:

```r
pak::pak("simonpcouch/pal")
```
Then, ensure that you have an `ANTHROPIC_API_KEY` set in your `.Renviron`—see `usethis::edit_r_environ()` for more information. If you'd like to use an LLM other than Anthropic's Claude 3.5 Sonnet—like OpenAI's ChatGPT—to power the pal, see `?pal()` for information on how to set default metadata on that model.
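Concretely, setting the key might look like the following; the key value shown is a placeholder, not something to copy verbatim:

```r
# Open your user-level .Renviron for editing:
usethis::edit_r_environ()

# Then add a line like the one below (placeholder value shown),
# save the file, and restart R so the new variable is picked up:
# ANTHROPIC_API_KEY=<your key here>
```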
To create a pal, simply pass `pal()` a pre-defined "role" and a keybinding you'd like it attached to. For example, to use the cli pal with the shortcut Cmd+;+C (written `Cmd+; Cmd+C`):

```r
pal("cli", "Cmd+; Cmd+C")
```
Then, highlight some code, press the keyboard shortcut, and watch your code be rewritten:
As-is, the package provides ergonomic LLM assistants for R package development:
"cli"
withCmd+;+C
: Convert to cli"testthat"
withCmd+;+T
: Convert to testthat 3"roxygen"
withCmd+;+R
: Document functions with roxygen
That said, the package provides infrastructure for others to make LLM assistants for any task in R, from authoring to interactive data analysis. With only a markdown file and a function call, users can extend pal to assist with their own repetitive but hard-to-automate tasks.
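A custom pal might be wired up roughly as follows. Note that `pal_add()`, its arguments, and the prompt path here are hypothetical stand-ins for whatever registration interface the package actually exports; consult the package documentation for the real function:

```r
# Hypothetical sketch: pal_add() and its arguments are illustrative,
# not the package's confirmed API.
#
# 1) Write a markdown file containing the prompt for your task,
#    e.g. "prompts/sql-style.md".
# 2) Register it as a new role, then attach a keybinding:
pal_add(role = "sql-style", prompt = "prompts/sql-style.md")
pal("sql-style", "Cmd+; Cmd+S")
```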
The cost of using pals depends on 1) the length of the underlying prompt for a given pal and 2) the per-token cost of the chosen model. Using the cli pal with Anthropic's Claude 3.5 Sonnet, for example, costs something like $15 per 1,000 code refactorings, while using the testthat pal with OpenAI's GPT-4o mini would cost something like $1 per 1,000 refactorings. Pals using a locally served LLM are "free" (in the usual sense of code execution, ignoring the cost of increased battery usage).
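As a back-of-envelope check on the cli figure, the arithmetic looks like this. The token counts and per-token prices below are illustrative assumptions (roughly Claude 3.5 Sonnet's list pricing at the time of writing), not measurements from the package:

```r
# Illustrative cost estimate; all numbers below are assumptions.
input_tokens  <- 3000        # system prompt plus selected code
output_tokens <- 400         # rewritten code returned by the model
price_in  <- 3  / 1e6        # assumed $ per input token
price_out <- 15 / 1e6        # assumed $ per output token

cost_per_call <- input_tokens * price_in + output_tokens * price_out
cost_per_1000 <- 1000 * cost_per_call
cost_per_1000
#> [1] 15
```

Shorter prompts or a cheaper model shrink this roughly linearly, which is how the testthat pal with GPT-4o mini lands closer to $1 per 1,000 refactorings.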