Interference between rooms
Closed this issue · 21 comments
Hi there,
I found that if I have two or more rooms chatting with, say, Bing Chat, there is some interference between them. It seems the bot thinks it is talking to the same person in all of them. This could be a privacy issue: if two people are chatting with the bot, one of them could even guess what questions the other asked. Is it possible to isolate rooms? Cheers.
It's possible, but at the moment the simplest way is to deploy several matrix_chatgpt_bot and node-chatgpt-api instances and limit each bot to a specific room via the room_id configuration option.
In this case, I assume we need different tokens for different instances? For example, a different Microsoft account for each instance?
How many compute resources will each instance require?
Thanks
No need to use different tokens; just one token is enough.
You can use docker stats to measure each container's resource usage.
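For example, once the containers from the compose file below are running, you could check them with:

docker stats matrix_chatgpt_bot_1 matrix_chatgpt_bot_2 matrix_chatgpt_bot_3 node-chatgpt-api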
Here is a sample compose file.
If you use env files, add ROOM_ID to .env1, .env2 and .env3.
If you use config files, add room_id to config1.json, config2.json and config3.json.
(Sample per-instance snippets follow the compose file.)
services:
  app1:
    image: hibobmaster/matrixchatgptbot:latest
    container_name: matrix_chatgpt_bot_1
    restart: always
    # build:
    #   context: .
    #   dockerfile: ./Dockerfile
    env_file:
      - .env1
    volumes:
      # use an env file or config.json
      # - ./config1.json:/app/config.json
      # use touch to create an empty db file; only needed to persist the database
      - ./db1:/app/db
      # import_keys path
      # - ./element-keys.txt:/app/element-keys.txt
    networks:
      - matrix_network
  app2:
    image: hibobmaster/matrixchatgptbot:latest
    container_name: matrix_chatgpt_bot_2
    restart: always
    # build:
    #   context: .
    #   dockerfile: ./Dockerfile
    env_file:
      - .env2
    volumes:
      # use an env file or config.json
      # - ./config2.json:/app/config.json
      # use touch to create an empty db file; only needed to persist the database
      - ./db2:/app/db
      # import_keys path
      # - ./element-keys.txt:/app/element-keys.txt
    networks:
      - matrix_network
  app3:
    image: hibobmaster/matrixchatgptbot:latest
    container_name: matrix_chatgpt_bot_3
    restart: always
    # build:
    #   context: .
    #   dockerfile: ./Dockerfile
    env_file:
      - .env3
    volumes:
      # use an env file or config.json
      # - ./config3.json:/app/config.json
      # use touch to create an empty db file; only needed to persist the database
      - ./db3:/app/db
      # import_keys path
      # - ./element-keys.txt:/app/element-keys.txt
    networks:
      - matrix_network
  api:
    # Bing API
    image: ghcr.io/waylaidwanderer/node-chatgpt-api:latest
    container_name: node-chatgpt-api
    restart: always
    volumes:
      - ./settings.js:/var/chatgpt-api/settings.js
    networks:
      - matrix_network

networks:
  matrix_network:
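For reference, a minimal sketch of the per-instance files. Only ROOM_ID / room_id comes from this thread; the room IDs are placeholders, and all other settings (homeserver, bot credentials, and so on) would be identical in every file, since the instances share one account.

# .env1
# ...same bot credentials as the other .env files...
ROOM_ID="!roomA:example.org"

# .env2
# ...same bot credentials as the other .env files...
ROOM_ID="!roomB:example.org"

# config1.json (if you use config files instead of env files)
{
  "room_id": "!roomA:example.org"
}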
One more suggestion: on the first launch, use a non-existent room_id so the bot can sync and store messages in the db, then stop it and set the correct room_id so the bot works as usual. This avoids messing up the room's chat timeline.
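A rough sketch of that first-launch procedure for one instance, assuming the compose file above and the env-file setup (the placeholder room ID is made up):

# 1. Set ROOM_ID in .env1 to a room that does not exist, e.g. !placeholder:example.org
docker compose up -d app1    # let the bot sync and populate ./db1
docker compose stop app1
# 2. Set ROOM_ID in .env1 to the real room ID
docker compose up -d app1    # the bot now only works in that room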
sounds reasonable. Thank you.
https://github.com/matrixgpt/matrix-chatgpt-bot works across different rooms. Is there any way to make your implementation work similarly, where only one bot is created and different rooms can access it independently without cross-talk?
I would have to refactor the code to achieve that, so at this time the simplest way is to launch several bot instances.
Sure, got it. Amazing implementation, but to use it within an application I'm working on, it would be useful to have one chatbot username that works across rooms, so I'll stay tuned for any update. For now I'm thinking of using https://github.com/matrixgpt/matrix-chatgpt-bot for the chat management and replacing the reverse proxy URL with different models, as shown in this implementation, which works as a drop-in replacement for the OpenAI URL: https://github.com/go-skynet/LocalAI
it would be useful to have one chatbot username that works across rooms.
When you launch several bot instances, you can use the same username.
The only difference among the instances is the room_id in the config file.
Oh interesting! Thanks, will do that.
@jaodei @seshubonam
With pandora: https://github.com/pengzhile/pandora/blob/master/doc/wiki_en.md
I can integrate ChatGPT web with session isolation, like what I did for https://github.com/hibobmaster/mattermost_bot
Maybe there will be three new commands: !talk, !goon, !new
awesome!
With Flowise I think that problem is solved automatically, as Flowise responses have sessions taken care of. I should run it and see. I'm waiting to host Flowise on my URL; once that's done, I'll update here.
So, if I add "flowise_api_url": "http://localhost:3000/api/v1/prediction/6deb3c89-45bf-4ac4-a0b0-b2d5ef249d21" to the config file, can I avoid the interference between rooms?
How can I obtain an ID like 6deb3c89-45bf-4ac4-a0b0-b2d5ef249d21, and the flowise_api_key? Do I need both of them?
Thanks.
can I avoid the interference between rooms?
No, you still need to launch several instances. The only mode that avoids the interference between rooms is the ChatGPT web mode used by the !talk, !goon and !new commands.
flowise_api_key is optional.
For an ID like 6deb3c89-45bf-4ac4-a0b0-b2d5ef249d21: there is a code button in the upper right area; you can click it and it will tell you.
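For completeness, a hedged sketch of how those entries might look in one instance's config file; the key names flowise_api_url and flowise_api_key come from the messages above, the other values are placeholders, and per the answer above this still requires one instance per room:

{
  "room_id": "!roomA:example.org",
  "flowise_api_url": "http://localhost:3000/api/v1/prediction/6deb3c89-45bf-4ac4-a0b0-b2d5ef249d21",
  "flowise_api_key": "your-optional-api-key"
}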
Just on a side note, could adding new instances be automated? For example, when a user clicks the create-room button, a new config file or .env is generated.
Would that work without having to restart any Docker/cloud instances?
Can I add a Replit bounty to fix this, if you don't mind? I would like to use it to beta test a chatbot I'm building. DM me on Twitter please, I'd like some continued support, as your work is great: https://twitter.com/seshubon
That solution is ugly; I will try to refactor the code to achieve session isolation for all chat commands.
Sure, thanks!