llmcord.py lets you and your friends chat with LLMs directly in your Discord server. It works with practically any LLM, remote or locally hosted.
Reply-based chat system
Just @ the bot to start a conversation and reply to continue. Build conversations with reply chains!
You can do things like:
- Continue your own conversation or someone else's
- "Rewind" a conversation by simply replying to an older message
- @ the bot while replying to any message in your server to ask a question about it
Additionally:
- Back-to-back messages from the same user are automatically chained together. Just reply to the latest one and the bot will see all of them.
- You can seamlessly move any conversation into a thread. Just create a thread from any message and @ the bot inside to continue.
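The reply-chain behavior above can be sketched independently of Discord: start from the newest message, follow reply references back toward the root, then reverse so the LLM sees the conversation oldest-first. A minimal illustration (the message-store shape and function name here are hypothetical, not llmcord.py's actual internals):

```python
MAX_MESSAGES = 20  # mirrors the MAX_MESSAGES setting

def build_chain(messages: dict, newest_id: int, limit: int = MAX_MESSAGES) -> list:
    """Walk reply references from the newest message back toward the root,
    returning at most `limit` messages in oldest-first order."""
    chain = []
    current = newest_id
    while current is not None and len(chain) < limit:
        msg = messages[current]
        chain.append(msg["content"])
        current = msg.get("reply_to")  # None once we reach the conversation start
    return list(reversed(chain))

# Toy message store: id -> {content, reply_to}
store = {
    1: {"content": "What is Rust?", "reply_to": None},
    2: {"content": "A systems language.", "reply_to": 1},
    3: {"content": "Is it memory safe?", "reply_to": 2},
}
print(build_chain(store, 3))  # ['What is Rust?', 'A systems language.', 'Is it memory safe?']
```

Note how the `limit` parameter also explains the "rewind" trick: replying to an older message simply makes that message the newest link in the chain.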
llmcord.py supports remote models from the OpenAI API, Mistral API, Anthropic API, and many more thanks to LiteLLM.
Or run a local model with ollama, oobabooga, Jan, LM Studio, or any other OpenAI-compatible API server.
- Supports image attachments when using a vision model (like gpt-4o, claude-3, llava, etc.)
- Supports text file attachments (.txt, .py, .c, etc.)
- Customizable system prompt
- DM for private access (no @ required)
- User identity aware (OpenAI API only)
- Streamed responses (turns green when complete, automatically splits into separate messages when too long)
- Displays helpful user warnings when appropriate (like "Only using last 20 messages" when the customizable message limit is exceeded)
- Caches message data in a size-managed (no memory leaks) and mutex-protected (no race conditions) global dictionary to maximize efficiency and minimize Discord API calls
- Fully asynchronous
- 1 Python file, ~200 lines of code
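Discord caps a single message at 2,000 characters, so a long response must be split before sending. A simplified sketch of that splitting step (the function name is hypothetical, and the real bot does this while streaming rather than on the finished text):

```python
DISCORD_CHAR_LIMIT = 2000  # Discord's per-message character limit

def split_response(text: str, limit: int = DISCORD_CHAR_LIMIT) -> list[str]:
    """Split text into chunks that each fit in one Discord message."""
    return [text[i:i + limit] for i in range(0, len(text), limit)]

chunks = split_response("x" * 4500)
print([len(c) for c in chunks])  # [2000, 2000, 500]
```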
Before you start, install Python and clone this git repo.

1. Install Python requirements: `pip install -U -r requirements.txt`

2. Create a copy of ".env.example" named ".env" and set it up (see below)

3. Run the bot: `python llmcord.py` (the invite URL will print to the console)
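A filled-in `.env` might look something like this (every value below is a placeholder for illustration; see the settings table for what each one means):

```
DISCORD_BOT_TOKEN=your-bot-token-here
DISCORD_CLIENT_ID=123456789012345678
DISCORD_STATUS_MESSAGE=Chatting with an LLM
LLM=gpt-4o
LLM_SETTINGS=max_tokens=1024, temperature=1.0
LLM_SYSTEM_PROMPT=You are a helpful Discord bot.
MAX_MESSAGES=20
OPENAI_API_KEY=your-openai-key-here
```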
| Setting | Instructions |
| --- | --- |
DISCORD_BOT_TOKEN | Create a new Discord bot at discord.com/developers/applications and generate a token under the "Bot" tab. Also enable "MESSAGE CONTENT INTENT". |
DISCORD_CLIENT_ID | Found under the "OAuth2" tab of the Discord bot you just made. |
DISCORD_STATUS_MESSAGE | Set a custom message that displays on the bot's Discord profile. Max 128 characters. |
LLM | For LiteLLM-supported providers (OpenAI API, Mistral API, ollama, etc.), follow the LiteLLM instructions for its model name formatting. For local models (oobabooga, Jan, LM Studio, etc.), set to `local/openai/model` (or `local/openai/vision-model` if using a vision model). Some setups instead require `local/openai/<MODEL_NAME>`, where `<MODEL_NAME>` is the exact name of the model you're using. |
LLM_SETTINGS | Extra API parameters for your LLM, separated by commas. Supports string, integer, and float values. (Default: `max_tokens=1024, temperature=1.0`) |
LLM_SYSTEM_PROMPT | Write anything you want to customize the bot's behavior! |
LOCAL_SERVER_URL | The URL of your local API server. Only applicable when "LLM" starts with `local/`. (Default: `http://localhost:5000/v1`) |
ALLOWED_CHANNEL_IDS | Discord channel IDs where the bot can send messages, separated by commas. Leave blank to allow all channels. |
ALLOWED_ROLE_IDS | Discord role IDs that can use the bot, separated by commas. Leave blank to allow everyone. Specifying at least one role also disables DMs. |
MAX_TEXT | The maximum amount of text allowed in a single message, including text from file attachments. (Default: 100,000) |
MAX_IMAGES | The maximum number of image attachments allowed in a single message. Only applicable when using a vision model. (Default: 5) |
MAX_MESSAGES | The maximum number of messages allowed in a reply chain. (Default: 20) |
OPENAI_API_KEY | Only required if you choose a model from OpenAI API. Generate an OpenAI API key at platform.openai.com/account/api-keys. You must also add a payment method to your OpenAI account at platform.openai.com/account/billing/payment-methods. |
MISTRAL_API_KEY | Only required if you choose a model from Mistral API. Generate a Mistral API key at console.mistral.ai/api-keys. You must also add a payment method to your Mistral account at console.mistral.ai/billing. |
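The LLM_SETTINGS value described above is a comma-separated list of `key=value` pairs with string, integer, and float values. Parsing it could look roughly like this (a sketch; `parse_llm_settings` is a hypothetical name, not necessarily how llmcord.py implements it):

```python
def parse_llm_settings(raw: str) -> dict:
    """Parse 'max_tokens=1024, temperature=1.0' into typed values."""
    settings = {}
    for pair in raw.split(","):
        if not pair.strip():
            continue  # tolerate trailing or doubled commas
        key, _, value = pair.partition("=")
        key, value = key.strip(), value.strip()
        # Try int first, then float, then fall back to a plain string
        for cast in (int, float):
            try:
                settings[key] = cast(value)
                break
            except ValueError:
                pass
        else:
            settings[key] = value
    return settings

print(parse_llm_settings("max_tokens=1024, temperature=1.0"))
# {'max_tokens': 1024, 'temperature': 1.0}
```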
- OPENAI_API_KEY and MISTRAL_API_KEY are provided as examples. Add more as needed for other LiteLLM-supported providers.
- If you're having issues, try my suggestions here
- Only models from the OpenAI API are "user identity aware", because only the OpenAI API supports the message "name" property. Hopefully other providers will support this in the future.
- PRs are welcome :)