
OperateGPT: Revolutionize Your Operations with One-Line Requests


简体中文 | Documents | WebSite

🚀🚀 Experience Now!!

🔥🔥 Latest Release Version: V0.0.1

🔥🔥 Multi-Model Management

Using large language models and multi-agent technology, a single request can automatically generate marketing copy, images, and videos, which can then be published to multiple platforms with one click, enabling a rapid transformation of marketing operations.

OperateGPT Process

Supported LLMs

LLM             | Supported   | Model Type  | Notes
ChatGPT         | ✅          | Proxy       | Default
Bard            | ✅          | Proxy       |
Vicuna-13b      | ✅          | Local Model |
Vicuna-13b-v1.5 | ✅          | Local Model |
Vicuna-7b       | ✅          | Local Model |
Vicuna-7b-v1.5  | ✅          | Local Model |
ChatGLM-6B      | ✅          | Local Model |
ChatGLM2-6B     | ✅          | Local Model |
baichuan-13b    | ✅          | Local Model |
baichuan2-13b   | ✅          | Local Model |
baichuan-7b     | ✅          | Local Model |
baichuan2-7b    | ✅          | Local Model |
Qwen-7b-Chat    | Coming soon | Local Model |

Supported Embedding Models

Embedding Model        | Supported | Notes
sentence-transformers  | ✅        | Default
text2vec-large-chinese | ✅        |
m3e-large              | ✅        |
bge-large-en           | ✅        |
bge-large-zh           | ✅        |
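
Once the default model has been downloaded (see Installation below), it can be exercised directly with the sentence-transformers library. This is only a minimal sanity check, not how OperateGPT wires the embedding model internally:

# Minimal sanity check (not OperateGPT internals): load the default embedding
# model from the local `models` directory and embed two sentences.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("models/all-MiniLM-L6-v2")
vectors = model.encode(["OperateGPT generates marketing copy.",
                        "One request, multiple platforms."])
print(vectors.shape)  # (2, 384): all-MiniLM-L6-v2 produces 384-dimensional embeddings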

Installation

First, download the required models.

mkdir models && cd models

# Default embedding model (size: 522 MB)
git lfs install 
git clone https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2

# [Optional]
# Size: 94 GB; can run on CPU (RAM > 14 GB). Using the stablediffusion-proxy service is recommended: https://github.com/xuyuan23/stablediffusion-proxy
git lfs install 
git clone https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0

# [Optional]
# Size: 16 GB; can run on CPU (RAM > 16 GB). Using the Text2Video proxy service is recommended: https://github.com/xuyuan23/Text2Video
git lfs install
git clone https://huggingface.co/cerspense/zeroscope_v2_576w
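
If you downloaded the optional SDXL weights locally, a quick way to confirm they load is the diffusers library (assumed to be installed along with torch); this is only a smoke test, not the code path OperateGPT itself uses:

# Smoke test for the optional local SDXL weights
# (assumes `diffusers` and `torch` are installed; very slow on CPU).
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "models/stable-diffusion-xl-base-1.0", torch_dtype=torch.float32
)
image = pipe("a lighthouse at sunset", num_inference_steps=20).images[0]
image.save("sdxl_smoke_test.png")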

Then, install the dependencies and launch the project.

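# gcc-c++ provides the C++ compiler needed to build some dependencies (yum is for CentOS/RHEL; on Debian/Ubuntu, use `apt install g++` instead)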
yum install gcc-c++
pip install -r requirements.txt

# Copy the file `.env.template` to a new file `.env`, and modify the parameters in `.env`.
cp .env.template .env 

# [Optional]
# Deploy the local Stable Diffusion service; if the StableDiffusion proxy is used, there is no need to run this.
python operategpt/providers/stablediffusion.py

# [Optional]
# Deploy the local Text2Video service; if the Text2Video proxy server is used, there is no need to run this.
python operategpt/providers/text2video.py

# Quick trial: takes two parameters, the idea and the language (`en` by default; `zh` for Chinese is also supported, see the example below).
python main.py "Prepare a travel plan to Australia" "en"
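
To run the same request in Chinese, pass zh as the second parameter; the prompt below is just an illustrative translation:

python main.py "制定一份澳大利亚旅行计划" "zh"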

Configuration

  • By default, ChatGPT is used as the LLM, so you need to configure OPEN_AI_KEY in .env:
OPEN_AI_KEY=sk-xxx

# If you don't deploy the Stable Diffusion service, no images will be generated.
SD_PROXY_URL=127.0.0.1:7860

# If you don't deploy the Text2Video service, no videos will be generated.
T2V_PROXY_URL=127.0.0.1:7861
  • For more details, see the file .env.template. A typical pattern for loading these values is sketched below.
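
The values in .env are plain environment variables; a common way such a file is loaded in Python is python-dotenv, sketched here (OperateGPT's own loading code may differ):

# Sketch of how `.env` values are typically consumed via python-dotenv;
# the exact loading code inside OperateGPT may differ.
import os
from dotenv import load_dotenv

load_dotenv()  # reads the `.env` file from the current working directory
openai_key = os.getenv("OPEN_AI_KEY")
sd_proxy_url = os.getenv("SD_PROXY_URL", "127.0.0.1:7860")
print("Stable Diffusion proxy:", sd_proxy_url)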

Generated DEMOs