- [March 2024] Add gemma-7b and qwen-7b models (based on Ollama)
- [February 2024] Add mistral-7b model (based on Ollama)
- [February 2024] Add gemini-pro model (based on Open API)
- [January 2024] Refactor config-template.yaml to control both the backend and the frontend settings at the same time; see the notes on config-template.yaml below for more details
- [January 2024] Add internlm2-chat-7b model (based on LMDeploy)
- [January 2024] Released version v0.0.1, officially open source!
AOE, an acronym from DOTA2 for Area Of Effect, denotes an ability that can affect a group of targets within a certain area. Here, AOE in AI means that users can obtain parallel outputs from multiple LLMs with a single prompt at the same time.
Currently, there are many open-source chat frameworks based on ChatGPT, but an LGC (LLM Group Chat) framework has yet to appear.
The emergence of OpenAOE fills this gap: OpenAOE helps LLM researchers, evaluators, engineering developers, and even non-professionals quickly access the market's well-known commercial and open-source LLMs, providing both a single-model serial response mode and a multi-model parallel response mode.
OpenAOE can:
- return answers from one or more LLMs at the same time for a single prompt.
- provide access to commercial LLM APIs, with default support for GPT-3.5, GPT-4, Google PaLM, MiniMax, Claude, Spark, etc., and also support user-defined access to other LLM APIs (API keys need to be prepared in advance).
- provide access to open-source LLM APIs (we recommend using LMDeploy for one-click deployment).
- provide backend APIs and a web UI to meet different requirements.
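To illustrate the parallel response mode, the sketch below fans a single prompt out to several models. The request shape, function names, and model identifiers here are assumptions for illustration only, not OpenAOE's actual API schema:

```typescript
// Sketch of the "one prompt, many models" idea behind parallel mode.
// The ModelRequest shape below is hypothetical, not OpenAOE's real schema.
interface ModelRequest {
  model: string;
  prompt: string;
}

function buildParallelRequests(prompt: string, models: string[]): ModelRequest[] {
  // One request object per model, all sharing the same prompt.
  return models.map((model) => ({ model, prompt }));
}

const requests = buildParallelRequests("Explain AOE in one sentence.", [
  "gpt-3.5-turbo",
  "internlm2-chat-7b",
  "mistral-7b",
]);
```

The resulting requests could then be dispatched concurrently (for example with `Promise.all`), which is what yields simultaneous answers in the web UI.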
Tip
Requires Python >= 3.9
- Clone this project:

```shell
git clone https://github.com/internlm/OpenAOE
```
- Build the frontend project when the frontend code has changed (install Node.js and npm first):

```shell
cd OpenAOE/openaoe/frontend
npm install
npm run build
```
- Install the backend dependencies and start the service:

```shell
cd OpenAOE # the directory you cloned into
pip install -r openaoe/backend/requirements.txt
# add your API keys to the config file first
python -u -m openaoe.main -f openaoe/backend/config/config-template.yaml
```
Tip
`/path/to/your/config-template.yaml` is a configuration file loaded by OpenAOE at startup, which contains the relevant configuration information for the LLMs, including API URLs, AKSKs, tokens, etc. A template configuration file can be found in `openaoe/backend/config/config-template.yaml`.
Note that this `config-template.yaml` DOES NOT contain any API access data; you should add it yourself.
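As a rough sketch of what such a configuration entry might look like (the key names below are illustrative assumptions, not the template's actual schema; consult the shipped config-template.yaml for the real field names):

```yaml
# Hypothetical fragment; the real field names live in
# openaoe/backend/config/config-template.yaml
openai:
  api_key: "sk-..."                        # your own API key, never committed
  api_base: "https://api.openai.com/v1"    # provider API URL
```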
The technology stack we use includes:
- Backend framework based on Python + FastAPI;
- Frontend framework based on TypeScript + Sealion-Client (encapsulated based on React) + Sealion-UI;
- Build tools:
  - conda: quickly create a virtual Python environment and install the necessary packages
  - npm: build the frontend project
Tip
The build tools can be installed quickly by `pip install -U sealion-cli`
- Frontend code is in `openaoe/frontend`
- Backend code is in `openaoe/backend`
- The project entry point is `openaoe/main.py`
- Add the new model's info, like `name`, `avatar`, `provider`, etc., in `openaoe/frontend/src/config/model-config.ts`
- Add a basic API request payload configuration for the new model in `openaoe/frontend/src/config/api-config.ts`
- Modify your new model's payload in `openaoe/frontend/src/services/fetch.ts`; you may need to change the payload structure and handle corner cases according to your model's API definition.
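For instance, a new entry in model-config.ts might look roughly like the following. The interface and field names are assumptions based on the description above (`name`, `avatar`, `provider`); mirror the existing entries in that file for the real schema:

```typescript
// Hypothetical model entry; copy the field layout of existing entries in
// openaoe/frontend/src/config/model-config.ts rather than this sketch.
interface ModelInfo {
  name: string;     // display name shown in the chat UI
  avatar: string;   // path to the model's avatar image
  provider: string; // who serves the model, e.g. "ollama" or "lmdeploy"
}

const myNewModel: ModelInfo = {
  name: "my-llm-7b",
  avatar: "/assets/my-llm.png",
  provider: "lmdeploy",
};
```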