
GPT-3.5-turbo ChatGPT Bot/Voice Assistant | 📱 Cross-Platform | 🦾 Async | 🗣 Good Contextual Support | 🌻 sh & docker Deployment | 🔌 API Server Provided | 🎤 Azure/Vits for Voice Chatting | 🌎 Real-time Information Searching | 📷 Multi-modal/Image Understanding | 💐 Self-maintained LLM Framework


OpenaiBot

中文说明 (Chinese documentation)

An intuitively designed cross-platform bot with multi-turn conversation management, multimodal (voice/image) understanding, a self-maintained toolkit, and cross-traced replies.

If the instant messaging platform you need is not supported, or you want to develop a new application, you are welcome to contribute to this repository.

You can develop a new Controller by using Event.py.
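As a rough, purely hypothetical sketch (the class and callback names below are illustrative, not the actual Event.py interface), a new platform controller mainly needs to receive a platform message, hand it to the event layer, and send the reply back:

```python
# Purely hypothetical sketch: MyPlatformController and handle_event are
# illustrative names, NOT the actual Event.py interface. Check Event.py in the
# repository for the real receiver and callback signatures.
import asyncio


class MyPlatformController:
    """Outline of what a new platform controller generally has to do."""

    def __init__(self, handle_event):
        # handle_event: async callable standing in for the Event.py layer;
        # it takes the normalized user text and returns the reply text.
        self.handle_event = handle_event

    async def on_message(self, user_id: str, text: str) -> str:
        # 1) normalize the incoming platform message
        # 2) pass it to the event layer
        # 3) return the reply for the platform SDK to send back
        prompt = f"{user_id}: {text.strip()}"
        return await self.handle_event(prompt)


async def _echo_event(prompt: str) -> str:
    # Placeholder for the real event handler.
    return f"echo: {prompt}"


if __name__ == "__main__":
    controller = MyPlatformController(_echo_event)
    print(asyncio.run(controller.on_message("42", "hello")))
```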

Compatibility with multiple LLMs and integration with GPT and third-party systems is handled by our llm-kira project on GitHub.

If you run into a deployment problem, please open an issue or discussion rather than emailing me.

🥽 Features

  • Accurate billing limits, with quotas bound to user IDs. ⚡️
  • Supports asynchronous operations and can handle multiple requests simultaneously. 🚀
  • Allows for private and group chats, catering to different scenarios. 💬
  • Implements chat rate limiting to avoid overly frequent requests. ⏰
  • Provides entertainment and interactive features, allowing for proactive engagement with users. 🎉
  • Includes blacklists, whitelists, and quota systems to control conversation partners. 🔒
  • Designed for full compatibility and strong scalability, adapting to different application scenarios. 🔌
  • Features a memory pool that guarantees the storage of context memory for up to 1000 rounds, with dynamic construction. 💾
  • Supports management, roles, and custom writing styles, providing more personalized options. 🤖
  • Integrates Azure and Whisper local voice assistants, offering more ways for voice interaction. 🗣
  • Allows for polling of multiple API keys for easy management, with automatic expiration reminders. 📊
  • Supports multimodal interaction, including BLIP image understanding, speech recognition, and sticker support. 👂😎
  • Has a sound content safety system, including removable content safety components and official API filtering of content. 🔒
  • Theoretically supports cross-platform access to any chat platform. 🌐
  • The intuitive design of the chat allows for cross-replying, retracing replies, trigger-based replies, and the use of stickers for added fun. 😊
  • Abstract design for injecting third-party information into the LLM, supporting real-time content retrieval and automatic injection of up-to-date information to assist answers. 🕸
  • Self-maintained model framework that supports any LLM model and any external API integration, abstracting and unifying access to GPT-3 and GPT-3.5. 🤖

🪜 Deploy It

🔨 Check

Make sure your server has at least 1 GB of RAM and 10 GB of free storage.

For ARM servers, install Rust first: curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh (setup.sh can now install Rust automatically.)

📦 Deploy/Renew

curl -LO https://raw.githubusercontent.com/LLMKira/Openaibot/main/setup.sh && sh setup.sh

For Chinese users

curl -LO https://raw.kgithub.com/LLMKira/Openaibot/main/setup.sh && sh setup.sh

Or Docker Deploy

🍽 Configure

  • Set up Redis
apt-get install redis
systemctl enable redis.service --now
  • Edit the bot config
cp Config/app_exp.toml Config/app.toml

nano Config/app.toml
# Comment out the sections you don't want to enable

# QQ Bot
[Controller.QQ]
master = [114, 514] # Master's QQ number(s)
account = 0  # The bot's QQ number
http_host = 'http://localhost:8080'   # Mirai http Server
ws_host = 'http://localhost:8080'   # Mirai Websocket Server
verify_key = ""
trigger = false # Respond proactively when appropriate
INTRO = "POWER BY OPENAI"  # Suffix appended to replies
ABOUT = "Created by github.com/LLMKira/Openaibot" # Reply to /about
WHITE = "Group NOT in WHITE list" # Whitelist/blacklist tip
# Proxy settings: proxies only the bot, not the OpenAI API
proxy = { status = false, url = "http://127.0.0.1:7890" }

# Telegram Bot
[Controller.Telegram]
master = [114, 514] # User ID (check it with @JsonDumpBot)
botToken = '' # Bot token from @BotFather
trigger = false
INTRO = "POWER BY OPENAI"
ABOUT = "Created by github.com/LLMKira/Openaibot"
WHITE = "Group NOT in WHITE list"
# Proxy settings: proxies only the bot; for an OpenAI API proxy, see service.json
proxy = { status = false, url = "http://127.0.0.1:7890" }

# Basic conversation event server, used for web support or the voice assistant
[Controller.BaseServer]
host = "127.0.0.1"
port = 9559

If you want to configure the backend or an OpenAI proxy, please check the Deploy Docs.
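The [Controller.BaseServer] block above exposes the conversation over plain HTTP for web or voice-assistant clients. Below is only a minimal client sketch using requests; the /chat route and the JSON fields are assumptions made for illustration, so check the Deploy Docs for the actual endpoints and payload.

```python
# Minimal client sketch for the conversation server. The "/chat" route and the
# "prompt"/"reply" JSON fields are assumptions for illustration, not the
# documented BaseServer API; see the Deploy Docs for the real ones.
import requests

BASE_URL = "http://127.0.0.1:9559"  # host/port from [Controller.BaseServer]


def ask(prompt: str) -> str:
    resp = requests.post(
        f"{BASE_URL}/chat",  # hypothetical route
        json={"prompt": prompt, "user_id": "demo"},  # hypothetical fields
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("reply", "")


if __name__ == "__main__":
    print(ask("Hello there"))
```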

🪶 App Token

  • Telegram

Telegram BotToken Request

Make sure the bot is a group admin or privacy mode is turned off.

  • QQ

Configuring the QQ bot

🌻 Run Bot

The bot can be run as multiple processes, managed with pm2.

apt install npm
npm install pm2@latest -g
# or
yarn global add pm2

# test bot
python3 main.py

# run bot
pm2 start pm2.json


# check status
pm2 status

# stop bot
pm2 stop pm2.json
pm2 stop <pm2-id>

# restart bot
pm2 restart <pm2-id>

Once configured, send the bot a message; it will reply with your platform ID. Use the /add_white_user command to add that ID to the whitelist so you can talk, or use /close_group_white_mode to turn off the bot's group whitelist mode.

🎤 Or Run Voice Assistant

In addition to the bot, we also provide a voice assistant.

The Voice Assistant is a web-dependent voice assistant that you can easily run on small devices through Azure's or OpenAI's recognition services; a recognition-call sketch follows the setup steps below.

  • Run BaseEvent Server
# Basic conversation event server, used for web support or the voice assistant
[Controller.BaseServer]
port = 9559
  • Run Vits Server

https://github.com/LlmKira/MoeGoe

  • Run Assistant
cd Assistant
cat install.md
pip3 install -r requirements.txt
python3 clinet.py
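The assistant depends on a speech-recognition backend (Azure or OpenAI). The snippet below is not part of the Assistant code; it only shows what a standalone Whisper transcription call looks like with the pre-1.0 openai Python SDK, assuming you have an API key and an audio file on disk.

```python
# Standalone Whisper transcription example (pre-1.0 openai SDK); not part of
# the Assistant code, just an illustration of the recognition step.
import openai

openai.api_key = "sk-..."  # your OpenAI key

with open("speech.ogg", "rb") as audio_file:
    # "whisper-1" is OpenAI's hosted Whisper model.
    transcript = openai.Audio.transcribe("whisper-1", audio_file)

print(transcript["text"])
```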

🥕 Add Api Key

Use the /add_api_key command to add an OpenAI key to Config/api_keys.json.

🫧 About ID

You may be wondering about our multi-platform ID system. We store your ID in the json/database as real_id + suffix.

  • toml

In app.toml, use your real ID without the suffix; this is the ID shown in the whitelist prompt.

  • json/command

When using the user/group authorization commands, append the corresponding suffix to the real ID.

| Controller | suffix_id |
|------------|-----------|
| QQ         | 101       |
| Telegram   | 100       |
| Api        | 103       |
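For illustration, assuming that "real_id + suffix" means plain string concatenation (our reading of the format above), a hypothetical helper would look like this:

```python
# Illustration of the stored ID format, assuming plain string concatenation of
# real_id and the platform suffix; the helper name is hypothetical.
SUFFIX = {"QQ": "101", "Telegram": "100", "Api": "103"}


def stored_id(real_id: int, platform: str) -> str:
    return f"{real_id}{SUFFIX[platform]}"


print(stored_id(123456789, "Telegram"))  # -> "123456789100"
```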

🥪 About Models

ChatGPT

| models             | token limit | cost                                                        |
|--------------------|-------------|-------------------------------------------------------------|
| gpt-3.5-turbo      | 4095        | Optimized for chat, at 1/10th the cost of text-davinci-003  |
| gpt-3.5-turbo-0301 | 4095        | Optimized for chat, at 1/10th the cost of text-davinci-003  |

GPT3

| models           | token limit | cost                                                 |
|------------------|-------------|------------------------------------------------------|
| code-davinci-002 | 8000        | Free during the initial limited beta period of Codex |
| code-cushman-001 | 2048        | Free during the initial limited beta period of Codex |
| text-davinci-003 | 4000        | $0.0200 / 1K tokens                                  |
| text-curie-001   | 2048        | $0.0020 / 1K tokens                                  |
| text-babbage-001 | 2048        | $0.0005 / 1K tokens                                  |
| text-ada-001     | 2048        | $0.0004 / 1K tokens                                  |
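For reference, and independently of the bot itself, a direct call to one of the chat models above with the pre-1.0 openai Python SDK looks like this; the token limits listed above bound the prompt length plus max_tokens.

```python
# Direct gpt-3.5-turbo call with the pre-1.0 openai SDK, independent of the bot.
import openai

openai.api_key = "sk-..."  # your OpenAI key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
    max_tokens=64,  # prompt tokens + max_tokens must stay within the model limit
)
print(response["choices"][0]["message"]["content"])
```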

Keyword filtering configuration

To prevent malicious inducement and attacks, we have two measures: OpenAI TosApi filtering and simplified/traditional Chinese keyword filtering.

Simplified/traditional Chinese keyword filtering (Chinese only)

If it does not exist, the sensitive-word file is automatically initialized at Data/Danger.form; you can disable it by clearing the file down to a single line.

OpenAI TosApi filtering

Leave the inspection type array empty to disable the filter.

Please see Service Configuration Guide.
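For reference, the TosApi check corresponds to OpenAI's moderation endpoint. A standalone call with the pre-1.0 openai SDK looks like the sketch below; the bot wires this in through its own configuration, so this is only an illustration of the kind of check performed.

```python
# Standalone OpenAI moderation call (pre-1.0 openai SDK); illustrates the kind
# of check the TosApi filter performs. The bot configures this itself.
import openai

openai.api_key = "sk-..."  # your OpenAI key

result = openai.Moderation.create(input="Some user message to screen.")
print("flagged:", result["results"][0]["flagged"])
```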

🌽 /Config File

When Redis is not available, our llm-kira dependency library stores its data in the current package directory.

For robustness, the application's own configuration is stored in Redis, except for api_keys.json, service.json, and assistants.json.

If you have a config.json, the application will automatically initialize it, and you can update the configuration in this file using the /config command.

🎸 Command

Due to a lack of maintainers, some commands only work on some platforms.

chat - talk
write - continue writing
forgetme - reset memory
remind - scene setting; overwrite or cancel it with a short text
voice - voice support
style - set preferred wording

trigger - Admin: enable proactive, unsolicited replies
trace - Admin: enable automatic tracking of associated channels
cross - Admin: toggle cross-replies
silent - Admin: enable silent error reporting

auto_adjust - automatic optimizer
set_user_cold - set user cooldown
set_group_cold - set group cooldown
set_token_limit - set output limit length
set_input_limit - set input limit length
see_api_key - show how many API keys are configured
del_api_key - Delete Api key
add_api_key - add Api key
config - get/backup hot configuration file
set_per_user_limit - set normal user limit
set_per_hour_limit - set user hour limit
promote_user_limit - raise a user's limit
reset_user_usage - Reset user usage
add_block_group - block group
del_block_group - Unblock group
add_block_user - block user
del_block_user - Unblock user
add_white_group - add whitelist group
add_white_user - add whitelist user
del_white_group - remove whitelist group
del_white_user - remove whitelist user
update_detect - update sensitive words
open_user_white_mode - open user whitelist
open_group_white_mode - open group whitelist
close_user_white_mode - close user whitelist
close_group_white_mode - close group whitelist
open - open the robot
close - close the robot
change_head - toggle the head setting
change_style - toggle the style setting
help - help

🧀 More Docs

Details On Deploy Guide

For network plugins, sticker settings, proxy settings, custom model names, speech services, picture understanding, and censor configuration, please see the Service Configuration Guide.

🤗 Join Our Community

❤ Thanks

🍞 Other similar projects

  • ChatGPT Mirai Bot is a QQ bot based on the ChatGPT Web API, Edge API, and GPT-3.5

https://github.com/lss233/chatgpt-mirai-qq-bot

📃 License

This project is open source and available under
the [AGPL License](https://github.com/LLMKira/Openaibot/blob/main/LICENSE).

The CLAUSE file explains licensing, declarations, additional terms, and related matters.


You wouldn't believe it, but AI also wrote part of this README.