
Emacs Speech Input

A set of packages for speech and voice input in Emacs. The use cases are explained below, along with the packages that provide them:

Dictation with Real-Time Editing

esi-dictate aims to help you input text faster[fn::Needs more work and empirical validation.] than plain voice with post-edits or typing. It enables blended workflows of voice and keyboard input, assisted by LLMs for real-time edits.

At its most basic, it's a dictation tool that allows real-time edits. Most content is inserted in the buffer via voice at a voice cursor. Utterances are automatically inferred to contain edit suggestions, in which case the current voice context (displayed distinctly) is edited via an LLM.

While dictating, the keyboard cursor stays free, so you can keep making finer edits independently of the dictation.

Some more notes on the design are in my blog post here.

Usage

Presently this system uses Deepgram via a Python script, dg.py, which you need to put somewhere in your PATH. Python >=3.10 is required. Afterwards, you need to set the following configuration:

(use-package esi-dictate
  :vc (:fetcher github :repo lepisma/emacs-speech-input)
  :custom
  (esi-dictate-dg-api-key "<DEEPGRAM-API-KEY>")
  (esi-dictate-llm-provider <llm-provider-using-llm.el>)
  ;; Here is an example configuration using OpenAI's gpt-4o-mini:
  ;; (esi-dictate-llm-provider (make-llm-openai :key "<OPENAI-API-KEY>" :chat-model "gpt-4o-mini"))
  :bind (:map esi-dictate-mode-map
              ("C-g" . esi-dictate-stop))
  :config
  (setq llm-warn-on-nonfree nil)
  :hook (esi-dictate-speech-final . esi-dictate-fix-context))
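
To confirm that dg.py is reachable from inside Emacs, you can use the standard executable-find function; this check is just a convenience and not part of the package:

;; Returns the full path to dg.py if it is on PATH, nil otherwise.
(executable-find "dg.py")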

Start dictation mode using esi-dictate-start and exit it with esi-dictate-stop. esi-dictate-fix-context is hooked to the end of each utterance and applies general fixes without needing explicit commands. If you mark a region and call esi-dictate-move-here, the voice context and voice cursor move to the region's bounds. If you only need to move the voice cursor, move point to the desired position and call the same function without an active region.
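
A minimal convenience sketch; the keybinding below is only a suggestion and not something the package sets up:

;; Bind the dictation entry point globally (the key choice is an assumption).
(global-set-key (kbd "C-c v") #'esi-dictate-start)

;; To move only the voice cursor: place point where you want it,
;; leave the region inactive, then M-x esi-dictate-move-here.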

As of now, this uses non-free models for both transcription (Deepgram) and edits (GPT-4o mini). This will change in the future.

Customization

To improve LLM performance, you can customize esi-dictate-llm-prompt and add few-shot examples in esi-dictate-fix-examples.
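
For instance, here is a hedged sketch assuming esi-dictate-fix-examples holds (dictated . corrected) cons pairs; the exact shape may differ, so check the variable's documentation before copying this:

;; Both values below are illustrative assumptions, not package defaults.
(setq esi-dictate-llm-prompt
      "You fix dictated text, applying any spoken edit instructions.")
(setq esi-dictate-fix-examples
      '(("i want to to write something here" . "I want to write something here.")
        ("send the the report by friday umm thursday" . "Send the report by Thursday.")))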

For visual improvements, customize esi-dictate-cursor, esi-dictate-cursor-face, esi-dictate-intermittent-face, and esi-dictate-context-face.
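
A sketch of possible tweaks; the variable and face names come from the paragraph above, while the attribute values (and the assumption that esi-dictate-cursor holds a display string) are illustrative:

;; Assumed to be a display string marking the voice cursor.
(setq esi-dictate-cursor "⤳")
;; Faces are adjusted with the standard set-face-attribute.
(set-face-attribute 'esi-dictate-context-face nil :underline t)
(set-face-attribute 'esi-dictate-intermittent-face nil :slant 'italic)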

Flight-mode Recording

This is available as a dynamic module (esi-core) which needs a bit of cleanup before I document its usage.