GPTel is a simple, no-frills ChatGPT client for Emacs.
(Demo videos: intro-demo.mp4, intro-demo-2.mp4)
- Requires an OpenAI API key.
- It’s async and fast, and streams responses.
- Interact with ChatGPT from anywhere in Emacs (any buffer, shell, minibuffer, wherever).
- ChatGPT’s responses are in Markdown or Org markup.
- Supports conversations and multiple independent sessions.
- Save chats as regular Markdown/Org/Text files and resume them later.
- You can go back and edit your previous prompts, or even ChatGPT’s previous responses when continuing a conversation. These will be fed back to ChatGPT.
GPTel uses Curl if available, but falls back to `url-retrieve` to work without external dependencies.
- Breaking Changes
- Installation
- Usage
- Using it your way
- Additional Configuration
- Why another ChatGPT client?
- Will you add feature X?
- Alternatives
- Acknowledgments
`gptel-api-key-from-auth-source` now searches for the API key using the value of `gptel-host`, i.e. “api.openai.com” instead of the original “openai.com”. You need to update your `~/.authinfo`.
GPTel is on MELPA. Install it with `M-x package-install RET gptel`.

(Optional: Install `markdown-mode`.)
Clone or download this repository and run `M-x package-install-file RET` on the repository directory.

Installing the `markdown-mode` package is optional.
In `packages.el`:

```emacs-lisp
(package! gptel)
```

In `config.el`:

```emacs-lisp
(use-package! gptel
  :config
  (setq! gptel-api-key "your key"))
```
After installation with `M-x package-install RET gptel`:

- Add `gptel` to `dotspacemacs-additional-packages`
- Add `(require 'gptel)` to `dotspacemacs/user-config`
Procure an OpenAI API key.

Optional: Set `gptel-api-key` to the key. Alternatively, you may choose a more secure method such as:
- Storing it in `~/.authinfo`. By default, “api.openai.com” is used as HOST and “apikey” as USER:

  ```
  machine api.openai.com login apikey password TOKEN
  ```
- Setting it to a function that returns the key.
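For the function-based approach, here is one possible sketch using Emacs’s built-in `auth-source` library; the host string is an assumption and should match whatever you use in your `~/.authinfo` entry:

```emacs-lisp
;; Sketch: look up the key from ~/.authinfo at request time,
;; so the key itself never appears in your config file.
(setq gptel-api-key
      (lambda ()
        (auth-source-pick-first-password :host "api.openai.com")))
```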
- Select a region of text and call `M-x gptel-send`. The response will be inserted below your region.
- You can select both the original prompt and the response and call `M-x gptel-send` again to continue the conversation.
- Call `M-x gptel-send` with a prefix argument to
  - set chat parameters (GPT model, directives etc) for this buffer,
  - read the prompt from elsewhere or redirect the response elsewhere,
  - or replace the prompt with the response.
With a region selected, you can also rewrite prose or refactor code from here:
Code:
Prose:
- Run `M-x gptel` to start or switch to the ChatGPT buffer. It will ask you for the key if you skipped the previous step. Run it with a prefix-arg (`C-u M-x gptel`) to start a new session.
- In the gptel buffer, send your prompt with `M-x gptel-send`, bound to `C-c RET`.
- Set chat parameters (GPT model, directives etc) for the session by calling `gptel-send` with a prefix argument (`C-u C-c RET`):
That’s it. You can go back and edit previous prompts and responses if you want.
The default mode is `markdown-mode` if available, else `text-mode`. You can set `gptel-default-mode` to `org-mode` if desired.
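For example, to get Org-formatted chat buffers:

```emacs-lisp
;; Use Org markup for dedicated gptel buffers and responses
(setq gptel-default-mode 'org-mode)
```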
Saving the file will save the state of the conversation as well. To resume the chat, open the file and turn on `gptel-mode` before editing the buffer.
GPTel’s default usage pattern is simple, and will stay this way: Read input in any buffer and insert the response below it.
If you want custom behavior, such as
- reading input from or output to the echo area,
- or in pop-up windows,
- sending the current line only, etc,
GPTel provides a general `gptel-request` function that accepts a custom prompt and a callback to act on the response. You can use this to build custom workflows not supported by `gptel-send`. See the documentation of `gptel-request`, and the wiki for examples.
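As a sketch of the kind of command you can build on top of it (the command name `my/gptel-send-line` is hypothetical, and the `:callback` is assumed to receive the response string and an info plist, as described in `gptel-request`’s documentation):

```emacs-lisp
;; Sketch: send only the current line and show the reply in the echo area.
(defun my/gptel-send-line ()
  "Send the current line to ChatGPT and display the response."
  (interactive)
  (gptel-request
   (buffer-substring-no-properties (line-beginning-position)
                                   (line-end-position))
   :callback (lambda (response info)
               ;; RESPONSE is nil if the request failed.
               (if response
                   (message "ChatGPT: %s" response)
                 (message "gptel request failed: %s"
                          (plist-get info :status))))))
```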
These are packages that depend on GPTel to provide additional functionality:
- gptel-extensions: Extra utility functions for GPTel.
- ai-blog.el: Streamline generation of blog posts in Hugo.
- `gptel-host`: Overrides the OpenAI API host. This is useful if you transform the Azure API into the OpenAI API format, use a reverse proxy, or rely on a third-party proxy service for the OpenAI API.
- `gptel-proxy`: Path to a proxy to use for GPTel interactions. This is passed to Curl via the `--proxy` argument.
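For instance, both variables can be set together; the host and proxy values below are illustrative placeholders, not recommendations:

```emacs-lisp
;; Point GPTel at an OpenAI-compatible endpoint behind a proxy.
;; Both values here are placeholders.
(setq gptel-host "my-openai-proxy.example.com"
      gptel-proxy "socks5://127.0.0.1:1080")
```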
Other Emacs clients for ChatGPT prescribe the format of the interaction (a comint shell, org-babel blocks, etc). I wanted:
- Something that is as free-form as possible: query ChatGPT using any text in any buffer, and redirect the response as required. Using a dedicated `gptel` buffer just adds some visual flair to the interaction.
- Integration with org-mode, not using a walled-off org-babel block, but as regular text. This way ChatGPT can generate code blocks that I can run.
Maybe, I’d like to experiment a bit more first. Features added since the inception of this package include:

- Curl support (`gptel-use-curl`)
- Streaming responses (`gptel-stream`)
- Cancelling requests in progress (`gptel-abort`)
- General API for writing your own commands (`gptel-request`, wiki)
- Dispatch menus using Transient (`gptel-send` with a prefix arg)
- Specifying the conversation context size
- GPT-4 support
- Response redirection (to the echo area, another buffer, etc)
- A built-in refactor/rewrite prompt
- Limiting conversation context to Org headings using properties (#58)
- Saving and restoring chats (#17)
Features being considered or in the pipeline:
- Fully stateless design (#17)
Other Emacs clients for ChatGPT include
- chatgpt-shell: comint-shell based interaction with ChatGPT. Also supports DALL-E, executable code blocks in the responses, and more.
- org-ai: Interaction through special `#+begin_ai ... #+end_ai` Org-mode blocks. Also supports DALL-E, querying ChatGPT with the contents of project files, and more.
There are several more: chatgpt-arcana, leafy-mode, chat.el.
- Alexis Gallagher and Diego Alvarez for fixing a nasty multi-byte bug with `url-retrieve`.
- Jonas Bernoulli for the Transient library.