fish-ai

Adds AI functionality to fish. It's awesome! I built it to make my life easier,
and I hope it will make yours easier too. Here is the complete sales pitch:
- It can turn a comment into a shell command and vice versa, which means less time spent reading manpages, googling and copy-pasting from Stack Overflow. Great when working with `git`, `kubectl`, `curl` and other tools with loads of parameters and switches.
- Did you make a typo? It can also fix a broken command (similarly to `thefuck`).
- Not sure what to type next or just lazy? Let the LLM autocomplete your commands with a built-in fuzzy finder.
- Everything is done using two keyboard shortcuts, no mouse needed!
- It can be hooked up to the LLM of your choice (even a self-hosted one!).
- Everything is open source, hopefully somewhat easy to read and around 3000 lines of code, which means that you can audit the code yourself in an afternoon.
- Install and update with ease using `fisher`.
- Tested on both macOS and Linux, but should run on any system where a supported version of Python and git is installed.
- Does not interfere with `fzf.fish`, `tide` or any of the other plugins you're already using!
- Does not wrap your shell, install telemetry or force you to switch to a proprietary terminal emulator.
This plugin was originally based on Tom Dörr's `fish.codex` repository.
Without Tom, this repository would not exist!

If you like it, please add a ⭐. If you don't like it, create a PR.
Install the plugin using `fisher`:

```fish
fisher install realiserad/fish-ai
```

Create a configuration file called `~/.config/fish-ai.ini`.
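A minimal sketch of creating the file, assuming the default location under `~/.config`:

```fish
# Create an empty configuration file to edit
mkdir -p ~/.config
touch ~/.config/fish-ai.ini
```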
If you use a self-hosted LLM:

```ini
[fish-ai]
configuration = self-hosted

[self-hosted]
provider = self-hosted
server = https://<your server>:<port>/v1
model = <your model>
api_key = <your API key>
```
If you are self-hosting, my recommendation is to use Ollama with Llama 3.1 70B.
An out of the box configuration running on localhost could then look something
like this:

```ini
[fish-ai]
configuration = local-llama

[local-llama]
provider = self-hosted
model = llama3.1
server = http://localhost:11434/v1
```
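As a rough sketch (assuming Ollama is already installed and that the `llama3.1`
tag matches the model you want to use), downloading and serving the model could
look like this:

```fish
# Pull the model and start the Ollama API server on localhost:11434
# (skip `ollama serve` if Ollama already runs as a background service)
ollama pull llama3.1
ollama serve
```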
If you use OpenAI:

```ini
[fish-ai]
configuration = openai

[openai]
provider = openai
model = gpt-4o
api_key = <your API key>
organization = <your organization>
```
If you use Azure OpenAI:

```ini
[fish-ai]
configuration = azure

[azure]
provider = azure
server = https://<your instance>.openai.azure.com
model = <your deployment name>
api_key = <your API key>
```
If you use Gemini:

```ini
[fish-ai]
configuration = gemini

[gemini]
provider = google
api_key = <your API key>
```
If you use Hugging Face:

```ini
[fish-ai]
configuration = huggingface

[huggingface]
provider = huggingface
email = <your email>
password = <your password>
model = meta-llama/Meta-Llama-3.1-70B-Instruct
```

Available models are listed here. Note that 2FA must be disabled on the account.
If you use Mistral:

```ini
[fish-ai]
configuration = mistral

[mistral]
provider = mistral
api_key = <your API key>
```
If you use GitHub Models:

```ini
[fish-ai]
configuration = github

[github]
provider = self-hosted
server = https://models.inference.ai.azure.com
api_key = <paste GitHub PAT here>
model = gpt-4o-mini
```

You can create a personal access token (PAT) here. The PAT does not require
any permissions.
If you use Anthropic:

```ini
[anthropic]
provider = anthropic
api_key = <your API key>
```
Type a comment (anything starting with `#`), and press Ctrl + P to turn it
into a shell command!

You can also run it in reverse. Type a command and press Ctrl + P to turn it
into a comment explaining what the command does.
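For example, codifying a comment could look like this (purely illustrative; the
exact command the LLM produces will vary):

```fish
# find all files larger than 100 MB in the current directory
# ...press Ctrl + P and the comment may be replaced by something like:
find . -type f -size +100M
```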
Begin typing your command and press Ctrl + Space to display a list of
completions in `fzf` (it is bundled with the plugin, no need to install it
separately). Completions load in the background and show up as they become
available.

If a command fails, you can immediately press Ctrl + Space at the command
prompt to let `fish-ai` suggest a fix!
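A hypothetical session (the suggested fix depends on the LLM):

```fish
# A mistyped command fails...
gti status
# fish: Unknown command: gti
# ...press Ctrl + Space and fish-ai may suggest the corrected command:
git status
```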
You can tweak the behaviour of `fish-ai` by putting additional options in your
`fish-ai.ini` configuration file.

To explain shell commands in a different language, set the `language` option
to the name of the language. For example:

```ini
[fish-ai]
language = Swedish
```

This will only work well if the LLM you are using has been trained on a dataset
with the chosen language.
Temperature is a decimal number between 0 and 1 controlling the randomness of
the output. Higher values make the LLM more creative, but may impact accuracy.
The default value is `0.2`.

Here is an example of how to increase the temperature to `0.5`:

```ini
[fish-ai]
temperature = 0.5
```

This option is not supported when using the `huggingface` provider.
To change the number of completions suggested by the LLM when pressing
Ctrl + Space, set the `completions` option. The default value is `5`.

Here is an example of how you can increase the number of completions to `10`:

```ini
[fish-ai]
completions = 10
```
You can personalise completions suggested by the LLM by sending an excerpt of
your commandline history. To enable it, specify the maximum number of commands
from the history to send to the LLM using the `history_size` option. The
default value is `0` (do not send any commandline history).

```ini
[fish-ai]
history_size = 5
```

If you enable this option, consider the use of `sponge` to automatically remove
broken commands from your commandline history.
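`sponge` can also be installed with `fisher`; the plugin path below is my
assumption, so check the sponge repository for the canonical name:

```fish
# Keep failed commands out of the history that may be sent to the LLM
fisher install meaningful-ooo/sponge
```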
To send the output of a pipe to the LLM when completing a command, use the
`preview_pipe` option.

```ini
[fish-ai]
preview_pipe = True
```

This will send the output of the longest consecutive pipe after the last
unterminated parenthesis before the cursor. For example, if you autocomplete
`az vm list | jq`, the output from `az vm list` will be sent to the LLM.

This behaviour is disabled by default, as it may slow down the completion
process and lead to commands being executed twice.
You can switch between different sections in the configuration using the
`fish_ai_switch_context` command.
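For example, with a configuration like the one below (sections borrowed from
the earlier examples), `fish_ai_switch_context` lets you flip between the
OpenAI setup and the local Llama setup without editing the file by hand:

```ini
[fish-ai]
configuration = openai

[openai]
provider = openai
model = gpt-4o
api_key = <your API key>

[local-llama]
provider = self-hosted
model = llama3.1
server = http://localhost:11434/v1
```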
When using the plugin, `fish-ai` submits the name of your OS and the
commandline buffer to the LLM.

When you codify or complete a command, it also sends the contents of any files
you mention (as long as the file is readable), and when you explain or complete
a command, the output from `<command> --help` is provided to the LLM for
reference.

`fish-ai` can also send an excerpt of your commandline history when completing
a command. This is disabled by default.

Finally, to fix the previous command, the previous commandline buffer, along
with any terminal output and the corresponding exit code, is sent to the LLM.

If you are concerned with data privacy, you should use a self-hosted LLM. When
hosted locally, no data ever leaves your machine.
The plugin attempts to redact sensitive information from the prompt before
submitting it to the LLM. Sensitive information is replaced by the `<REDACTED>`
placeholder.

The following information is redacted:

- Passwords and API keys supplied on the commandline.
- Base64-encoded data in single or double quotes.
- PEM-encoded private keys.
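As an illustration (hypothetical; the exact redaction rules live in the
plugin's source), a password supplied on the commandline could be rewritten
before the prompt is sent:

```fish
# What you typed:
mysql --user admin --password hunter2
# What the LLM may receive instead:
mysql --user admin --password <REDACTED>
```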
If you want to contribute, I recommend reading `ARCHITECTURE.md` first.

This repository ships with a `devcontainer.json` which can be used with GitHub
Codespaces or Visual Studio Code with the Dev Containers extension.

To install `fish-ai` from a local copy, use `fisher`:

```fish
fisher install .
```
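Putting it together, a local development install might look like this (the
clone URL assumes the plugin lives under the `realiserad` account on GitHub,
as the install command earlier suggests):

```fish
# Clone the repository and install the plugin from the working copy
git clone https://github.com/realiserad/fish-ai.git
cd fish-ai
fisher install .
```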
Enable debug logging by putting `debug = True` in your `fish-ai.ini`.
Logging is done to syslog by default (if available). You can also enable
logging to file using `log = <path to file>`, for example:

```ini
[fish-ai]
debug = True
log = ~/.fish-ai/log.txt
```
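When logging to a file, you can follow the log in a second terminal while
reproducing the problem:

```fish
# Watch the fish-ai log as new entries arrive
tail -f ~/.fish-ai/log.txt
```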
The installation tests are packaged into containers and can be executed locally
with e.g. `docker`.

```fish
docker build -f tests/ubuntu/Dockerfile .
docker build -f tests/fedora/Dockerfile .
docker build -f tests/archlinux/Dockerfile .
```

The Python modules containing most of the business logic can be tested using
`pytest`.
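A minimal sketch of a local test run (assuming `pytest` is available and any
project dependencies are installed; the repository may define its own test
tooling):

```fish
# Run the Python unit tests from the repository root
pip install pytest
pytest
```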
A release is created by GitHub Actions when a new tag is pushed.

```fish
set tag (grep '^version =' pyproject.toml | \
    cut -d '=' -f2- | \
    string replace -ra '[ "]' '')
git tag -a "v$tag" -m "v$tag"
git push origin "v$tag"
```