helix-gpt

Code assistant language server for Helix with support for Copilot/OpenAI/Codeium/Ollama.

Completion example

Code actions example (space + a)

Available code actions: resolveDiagnostics, generateDocs, improveCode, refactorFromComment, writeTest

How?

When a trigger character is pressed, helix-gpt requests a completion and uses the entire file as context. The default trigger characters ["{", "(", " "] can be overridden with --triggerCharacters "{||(|| "
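
For example, the trigger characters can be overridden where helix-gpt is declared in languages.toml. A minimal sketch, assuming the copilot handler; this keeps "{" and "(" but drops the space trigger:

[language-server.gpt]
command = "helix-gpt"
# "||" separates the trigger characters
args = ["--handler", "copilot", "--triggerCharacters", "{||("]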

Use ctrl+x to manually trigger completions, and space+a to trigger code actions that only use the selected code as context.

Install

This was made to run with Bun, but you can also use a precompiled binary.

Without Bun

wget https://github.com/leona/helix-gpt/releases/download/0.34/helix-gpt-0.34-x86_64-linux.tar.gz \
-O /tmp/helix-gpt.tar.gz \
&& tar -zxvf /tmp/helix-gpt.tar.gz \
&& mv helix-gpt-0.34-x86_64-linux /usr/bin/helix-gpt \
&& chmod +x /usr/bin/helix-gpt

With Bun (tested with 1.0.25)

wget https://github.com/leona/helix-gpt/releases/download/0.34/helix-gpt-0.34.js -O /usr/bin/helix-gpt

Configuration

You can configure helix-gpt either by setting the environment variables below, or by passing the command line options directly to helix-gpt in the Helix configuration step.

All configuration options are documented in the repository.

NOTE: Copilot is the best choice due to the model and implementation.

Environment Variables

OPENAI_API_KEY=123 # Required if using openai handler
COPILOT_API_KEY=123 # Required if using copilot handler
CODEIUM_API_KEY=123 # Not required, will use public API key otherwise.
HANDLER=openai # openai/copilot/codeium
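
As a sketch of exposing them, assuming Helix is launched from a shell that reads your profile (the copilot handler and placeholder key are only examples):

# e.g. in ~/.bashrc or ~/.profile so Helix inherits them
export HANDLER=copilot
export COPILOT_API_KEY=123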

Command Line Arguments

(Added alongside command = "helix-gpt" in the Helix configuration, as shown below.)

--handler openai --openaiKey 123
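
In languages.toml these options go into the language server's args array. A minimal sketch using the openai handler with a placeholder key:

[language-server.gpt]
command = "helix-gpt"
# each flag and its value are separate array entries
args = ["--handler", "openai", "--openaiKey", "123"]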

You can also use:

helix-gpt --authCopilot

to fetch your Copilot token.
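
A rough sketch of that flow, assuming the command prints a token you can export (the exact output may differ):

helix-gpt --authCopilot
# export the token it prints so the copilot handler can use it
export COPILOT_API_KEY="<token printed above>"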

Helix Configuration

Example for TypeScript in .helix/languages.toml, tested with Helix 23.10 (older versions may not support multiple language servers).

[language-server.gpt]
command = "helix-gpt"

[language-server.ts]
command = "typescript-language-server"
args = ["--stdio"]
language-id = "javascript"

[[language]]
name = "typescript"
language-servers = [
    "ts",
    "gpt"
]
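
The gpt server currently has to be listed per language (see the single-config item in the Todo section). A hypothetical example for Python, assuming pylsp is the language server you already use:

[[language]]
name = "python"
language-servers = [
    "pylsp",
    "gpt"
]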

If you opt not to use the precompiled binary, modify the configuration as follows, pointing args at wherever you saved the downloaded .js:

[language-server.gpt]
command = "bun"
args = ["run", "/app/helix-gpt.js"]

All Done

If there are any issues, refer to the helix-gpt and Helix log files:

tail -f /root/.cache/helix/helix.log
tail -f /app/helix-gpt.log # Or wherever you set --logFile to
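
To send the helix-gpt log somewhere specific, --logFile can be passed like any other option; a sketch, where the path below is only an example:

[language-server.gpt]
command = "helix-gpt"
args = ["--logFile", "/tmp/helix-gpt.log"]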

Special Thanks

  • rsc1975 for their Bun Dockerfile.

Todo

  • Copilot support
  • Resolve diagnostics code action
  • Self-hosted model support (partial support for OpenAI-compatible APIs)
  • Inline completion provider (pending support from Helix)
  • Single config for all languages (pending #9318)
  • Support workspace commands to toggle functionality (pending Helix support for merging workspace commands)
  • Increase test coverage
  • Async load completions to show other language server results immediately (pending Helix support)
  • Improve recovery from errors, as they can sometimes leave the editor unusable