mekb-turtle/discord-ai-bot

Docker

Opened this issue · 12 comments

Can we please have a Docker image for this? I have no idea how to do it and I can't seem to find the appropriate resources. Docker would be helpful since the bot could autostart on reboot, and you could use your desktop without a terminal window always open.

I'd be willing to contribute a pull request for this; just tag me if you want it done @mekb-turtle

To get this done, all we need is a Dockerfile and, though not strictly necessary, a docker-compose file too :D. I'd also be willing to create a Makefile so you could run something like `make docker` to start the project in Docker. If you wanted to host the Docker image, I'd leave that up to you, but I could get it started by adding Docker support to a local clone of the code.
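A minimal Dockerfile for a Node.js bot like this one could look roughly like the sketch below. The Node version and entry point are assumptions on my part (the `/src/bot.js` path does match what later appears in the error logs, but verify against the repo before using this):

```dockerfile
# Hypothetical Dockerfile sketch, not taken from the repo.
FROM node:20-alpine
WORKDIR /src
# Install dependencies first so this layer is cached across code changes
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
CMD ["node", "bot.js"]
```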

Would there be a way for Docker to access Ollama? Perhaps have it in the same Docker container? We would need CUDA (or the AMD equivalent, ROCm) if the user wants GPU support.

I don't think you need CUDA for Docker, because Ollama can be hosted on the host machine, so only CUDA on the host would have to be set up. But yes, that does seem to be an issue I always struggle to solve the right way, so it's a very good question. The problem is how to let the Docker container talk back to Ollama on the host machine without going through the trouble of hosting Ollama in a Docker container too. If we put Ollama in the container, the user would then need to set up their CUDA-for-Docker stuff, which I hardly ever want to touch (just because I've tried it before and it didn't end well on my Linux machine... 😬). I know ollama-webui solves this well with their Docker container; I'll go check what they do.

Okay, it appears they use the special hostname Docker makes available, http://host.docker.internal:11434, to connect to Ollama. This is also the approach Docker recommends for connecting a service in a container to a service on the host, on all platforms.

So if a user uses the compose file, we can set the Ollama endpoint automatically by adding OLLAMA=http://host.docker.internal:11434 to the environment field of the discord-ai-bot service in docker-compose. If they'd rather run it with plain docker and no compose, we'd suggest they set that env variable themselves 😄
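The compose service described above could be sketched like this (service name and restart policy are illustrative). One caveat worth noting: on Linux, host.docker.internal is not defined by default and needs an explicit extra_hosts mapping (supported since Docker 20.10):

```yaml
# Hypothetical docker-compose sketch, not the repo's actual file.
services:
  discord-ai-bot:
    build: .
    environment:
      - OLLAMA=http://host.docker.internal:11434
    extra_hosts:
      # Required on Linux; host-gateway maps the name to the host.
      - "host.docker.internal:host-gateway"
    restart: unless-stopped
```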

I'm still willing to contribute code if you are up for it, but just give me the green light and I'll create a pull request. Thank you!

Yep, that sounds good. Does Docker let you clone a repo and then edit a file that's in it? Perhaps run something like `sed -i 's|^OLLAMA=http://localhost:11434$|OLLAMA=http://host.docker.internal:11434|' .env`
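As a sanity check, that sed substitution can be tried against a throwaway copy of the file (the path here is illustrative; the real .env would live in the cloned repo inside the image):

```shell
# Demonstrate the proposed sed rewrite on a throwaway .env copy
printf 'OLLAMA=http://localhost:11434\n' > /tmp/demo.env
sed -i 's|^OLLAMA=http://localhost:11434$|OLLAMA=http://host.docker.internal:11434|' /tmp/demo.env
cat /tmp/demo.env
# → OLLAMA=http://host.docker.internal:11434
```

In a Dockerfile this would typically go in a RUN step after the clone/COPY, so the baked-in default points at the host.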

Your docker compose doesn't seem to work for me
Uploading Screenshot_from_2024-01-11_23-18-10.png…

@kallmesid sorry, I would like to help, but I can't see your screenshot for some reason. Can you paste the error it's showing? Also, did you make sure to follow the readme's instructions? (You'll still have to set up your bot on Discord and set up the .env file accordingly.)

Running mistral-instruct on Ollama at 10.8.0.18:11434, in Docker, on CPU.
Custom system prompt on Mistral, with the model saved as lupin on Ollama.
Running this both via the Makefile and plain docker compose gives this error:

.env file:

# Use the system message above? (true/false)
USE_SYSTEM=false

# Use the model's system message? (true/false) If both are specified, model system message will be first
USE_MODEL_SYSTEM=false

# Require users to mention the bot to interact with it? (true/false)
REQUIRES_MENTION=true

# Whether to show a message at the start of a conversation
SHOW_START_OF_CONVERSATION=false

# Whether to use a random Ollama server or use the first available one
RANDOM_SERVER=false

# Whether to add a message before the first prompt of the conversation
INITIAL_PROMPT=""
USE_INITIAL_PROMPT=false

kallmesid@kallmesid:~/AI/lupin doc$ docker compose -p discord-ai up
[+] Running 1/0
 ✔ Container discord-ai-bot-1  Recreated                                                                                      0.0s 
Attaching to discord-ai-bot-1
discord-ai-bot-1  | [Shard Manager] [INFO] Loading
discord-ai-bot-1  | [Shard #0] [INFO] Created shard
discord-ai-bot-1  | undefined:1
discord-ai-bot-1  | "Inline code blocks are supported by surrounding text in backticks, e.g `print("Hello");`, block code is supported by surrounding text in three backticks, e.g ```print("Hello");```."
discord-ai-bot-1  |                                                                                 ^
discord-ai-bot-1  | 
discord-ai-bot-1  | SyntaxError: Unexpected non-whitespace character after JSON at position 80
discord-ai-bot-1  |     at JSON.parse (<anonymous>)
discord-ai-bot-1  |     at file:///src/bot.js:164:23
discord-ai-bot-1  |     at Array.map (<anonymous>)
discord-ai-bot-1  |     at parseJSONMessage (file:///src/bot.js:163:31)
discord-ai-bot-1  |     at parseEnvString (file:///src/bot.js:172:3)
discord-ai-bot-1  |     at file:///src/bot.js:175:29
discord-ai-bot-1  |     at ModuleJob.run (node:internal/modules/esm/module_job:218:25)
discord-ai-bot-1  |     at async ModuleLoader.import (node:internal/modules/esm/loader:329:24)
discord-ai-bot-1  |     at async loadESM (node:internal/process/esm_loader:28:7)
discord-ai-bot-1  |     at async handleMainPromise (node:internal/modules/run_main:113:12)
discord-ai-bot-1  | 
discord-ai-bot-1  | Node.js v20.11.0
discord-ai-bot-1  | /node_modules/discord.js/src/sharding/Shard.js:178
discord-ai-bot-1  |         reject(new DiscordjsError(ErrorCodes.ShardingReadyDied, this.id));
discord-ai-bot-1  |                ^
discord-ai-bot-1  | 
discord-ai-bot-1  | Error [ShardingReadyDied]: Shard 0's process exited before its Client became ready.
discord-ai-bot-1  |     at Shard.onDeath (/node_modules/discord.js/src/sharding/Shard.js:178:16)
discord-ai-bot-1  |     at Object.onceWrapper (node:events:633:26)
discord-ai-bot-1  |     at Shard.emit (node:events:518:28)
discord-ai-bot-1  |     at Shard._handleExit (/node_modules/discord.js/src/sharding/Shard.js:439:10)
discord-ai-bot-1  |     at ChildProcess.emit (node:events:518:28)
discord-ai-bot-1  |     at ChildProcess._handle.onexit (node:internal/child_process:294:12) {
discord-ai-bot-1  |   code: 'ShardingReadyDied'
discord-ai-bot-1  | }
discord-ai-bot-1  | 
discord-ai-bot-1  | Node.js v20.11.0
discord-ai-bot-1 exited with code 0

I was having a similar issue. I actually had to change the SYSTEM variable line in the .env to this:

.env

# System message that the language model can understand
# Feel free to change this
SYSTEM="The current date and time is <date>.

Basic markdown is supported.
Bold: **bold text here**
Italics: _italic text here_
Underlined: __underlined text here__
Strikethrough: ~~strikethrough text here~~
Spoiler: ||spoiler text here||
Block quotes: Start the line with a > followed by a space, e.g
> Hello there

Inline code blocks are supported by surrounding text in backticks, e.g `print('Hello');`, block code is supported by surrounding text in three backticks, e.g ```print('Hello');```.
Surround code that is produced in code blocks. Use a code block with three backticks if the code has multiple lines, otherwise use an inline code block with one backtick.

Links are supported by wrapping the text in square brackets and the link in parenthesis, e.g [Example](https://example.com)

Lists are supported by starting the line with a dash followed by a space, e.g - List
Numbered lists are supported by starting the line with a number followed by a dot and a space, e.g 1. List.
Images, links, tables, LaTeX, and anything else is not supported.

If you need to use the symbols >, |, _, *, ~, @, #, :, `, put a backslash before them to escape them.

If the user is chatting casually, your responses should be only a few sentences, unless they are asking for help or a question.
Don't use unicode emoji unless needed."

The only changed parts are the two `print("Hello");` snippets, which became `print('Hello');`; the nested double quotes inside the double-quoted SYSTEM value were the issue.

Since you're getting this issue too maybe I should update the .example.env to reflect that.
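The failure mode can be reproduced in isolation. From the stack trace, bot.js appears to JSON.parse lines of the SYSTEM message after wrapping them in quotes (parseJSONMessage at bot.js:163 is the real code; the snippet below is my guess at the relevant behavior, not the actual implementation):

```javascript
// Minimal reproduction of the SyntaxError from the log above.
// A line containing unescaped double quotes breaks once the string
// itself is wrapped in double quotes and fed to JSON.parse.
const line = 'Inline code, e.g `print("Hello");`.';

let error = null;
try {
  JSON.parse(`"${line}"`); // the inner " before Hello terminates the JSON string early
} catch (e) {
  error = e;
}
console.log(error instanceof SyntaxError); // true

// Switching the inner quotes to single quotes, as in the fixed .env, avoids this:
const fixed = line.replace(/"/g, "'");
console.log(JSON.parse(`"${fixed}"`));
```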

Thank you so much for this.
It works now, but my setup doesn't seem to like the OLLAMA=http://host.docker.internal:11434 in the compose file.
I simply put my IP in instead and it seems to be working that way, so maybe that's something you might want to take a look into.

Hmmm, interesting @kallmesid. I'm not exactly sure how to fix this since I can't reproduce it at the moment, but if you don't mind we could go back and forth to try to find a solution. Do you know where your Ollama is running (as a service in a Docker container, or as a service on your host machine)?

Sure. I'm running CasaOS for managing my Docker containers and the Ollama instance.

The compose file looks like this:
Pastebin

Both Ollama and the Discord AI bot are in Docker.

I believe the problem is that (I don't fully understand this, but) when the API call is routed through the internal Docker network, my setup doesn't like it; when I edit the compose file to route it through my normal physical network instead, it works for my personal setup.
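A likely explanation: host.docker.internal resolves to the host machine, so with Ollama itself running in a container it only works if Ollama's port is published on the host. When both services are in Docker, as here, putting them on the same compose network and addressing Ollama by service name avoids host.docker.internal entirely. A sketch, with all names illustrative rather than taken from the actual compose file:

```yaml
# Hypothetical compose sketch: both containers on one network, so the
# bot reaches Ollama by service name via Docker's internal DNS.
services:
  ollama:
    image: ollama/ollama
    volumes:
      - ollama:/root/.ollama
  discord-ai-bot:
    build: .
    environment:
      - OLLAMA=http://ollama:11434  # service name, not host.docker.internal
    depends_on:
      - ollama
volumes:
  ollama:
```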