
Plock

Use an LLM directly from literally anywhere you can type.

Write a prompt, select it, and hit Cmd+Shift+. (the period key). Plock will replace your prompt with the output in a streaming fashion.

Also! You can first put something on your clipboard (as in, copy some text) before writing/selecting your prompt, then hit Cmd+Shift+/ and it will use the copied text as context to answer your prompt.

For Linux, use Ctrl instead of Cmd.

100% local by default. (If you want to use an API or something else instead, you can call any shell script you want: just set USE_OLLAMA to false.)
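
For example, here's a hypothetical replacement script that forwards the prompt to OpenAI instead. The stdin/stdout contract is an assumption on my part; check the plock source for how it actually invokes your script.

#!/bin/sh
# Hypothetical Ollama replacement (assumes plock writes the prompt to stdin
# and reads the reply from stdout; verify against the plock source).
# Requires curl, jq, and OPENAI_API_KEY set in the environment.
prompt=$(cat)
curl -s https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d "$(jq -n --arg p "$prompt" '{model: "gpt-3.5-turbo", messages: [{role: "user", content: $p}]}')" \
  | jq -r '.choices[0].message.content'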

Note: Something not working properly? I won't know unless you tell me! Please log an issue, or take a crack at fixing it yourself and submit a PR! Have feature ideas? Log an issue!

Demo using Ollama

(in the video I mention rem, another project I'm working on)

Demo using GPT-3.5 and GPT-4

If you are going to use this with remote APIs, consider using environment variables for your API keys. Make sure they exist in whatever environment you launch plock from, or embed them directly (just don't push that code anywhere).
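
For example (the key value below is a placeholder; the launchctl line only matters on macOS, where GUI-launched apps don't inherit your shell profile):

export OPENAI_API_KEY="sk-..."             # in ~/.zshrc or ~/.bashrc
launchctl setenv OPENAI_API_KEY "sk-..."   # macOS: expose it to GUI apps too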

Getting Started

Install Ollama and make sure to run ollama pull openhermes2.5-mistral, or swap that model out in the code for something else.
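
Concretely, once Ollama itself is installed (from https://ollama.ai):

ollama pull openhermes2.5-mistral
ollama list    # confirm the model is available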

Launch "plock"

Shortcuts:

Ctrl / Cmd + Shift + .: Replace the selected text with the output of the model.

Ctrl / Cmd + Shift + /: Feed whatever is on your clipboard as "context", then replace the selected text with the output of the model.

Escape: Stop any streaming output

macOS will request access for keyboard accessibility.

Linux (untested): may require X11 libraries for clipboard handling and key simulation via enigo (its docs have helpful instructions).

System tray icons also require some extra packages.
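
As a rough Debian/Ubuntu sketch (the package names are my assumption and vary by distro and by Tauri/enigo version; consult their docs if something is missing):

sudo apt-get install libxdo-dev libwebkit2gtk-4.0-dev libgtk-3-dev libayatana-appindicator3-dev librsvg2-dev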

Windows (untested): you'll need to swap out Ollama for something else, as Ollama doesn't support Windows yet.

Building Plock

If you don't have Apple Silicon, or don't want to blindly trust binaries (you shouldn't), here's how you can build it yourself!

Prerequisites

  • Node.js (v14 or later)
  • Rust (v1.41 or later)
  • Bun (latest version)

Installation Steps

Node.js

Download from: https://nodejs.org/

Rust

curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
source $HOME/.cargo/env

Bun

curl https://bun.sh/install | bash

Project Setup

git clone <repo_url>
cd path/to/project
bun install
bun run tauri dev

Build

bun run tauri build
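
If the build succeeds, the bundle should land in Tauri's default output directory (an assumption based on the standard Tauri layout; the exact bundle format depends on your platform):

ls src-tauri/target/release/bundle/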

Another demo

Another demo, where I use the perplexity shell script to generate an answer super fast. Not affiliated, I was just replying to a thread lol

(demo video: Screen.Recording.2024-01-21.at.7.21.53.PM.mov)

Secrets

Curious folks might be wondering what the ocr feature is. I took a crack at taking a screenshot, running OCR on it, and using that as context instead of copying text manually. Long story short, rusty-tesseract really disappointed me, which is awkward because it's core to xrem.

If someone wants to figure this out... this could be really cool, especially with multi-modal models.
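
For anyone who wants to pick it up, here's the rough shape of the idea in shell terms (assumes macOS plus the tesseract CLI; my actual attempt used the rusty-tesseract crate from Rust):

screencapture -x /tmp/plock-shot.png             # grab the screen silently
tesseract /tmp/plock-shot.png stdout | pbcopy    # OCR it onto the clipboard as context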