For the Chinese tutorial, see my blog.
Please make sure you have searched the existing Issues and Discussions before creating a new one; otherwise it will be closed directly. For anything that is not a bug report or a new-feature request, please ask in Discussions instead.
For network programming learning purposes only.
The previous Python version is archived in the `py` branch; it is not stable and is difficult to maintain (async in Python causes many problems).
Provides an API nearly identical to the OpenAI API (with `gpt-3.5-turbo` and `gpt-4`). The only difference is that you use your Copilot application token instead of an OpenAI token.
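For example, a chat completions request has exactly the OpenAI request shape; only the bearer token differs. A minimal sketch (the host and token below are placeholders):

```python
import json

# POST /v1/chat/completions -- identical to the OpenAI request shape.
# The Authorization header carries a Copilot app token (ghu_...),
# not an OpenAI key. Host and token below are placeholders.
url = "http://localhost:8080/v1/chat/completions"
headers = {
    "Authorization": "Bearer ghu_xxxxxxxxxxxxxxxx",  # placeholder Copilot app token
    "Content-Type": "application/json",
}
payload = {
    "model": "gpt-4",
    "messages": [{"role": "user", "content": "Hello!"}],
}
body = json.dumps(payload)
# Send with any HTTP client, e.g. requests.post(url, headers=headers, data=body)
```

Because the shape matches, existing OpenAI client libraries should work by pointing their base URL at this service and passing the Copilot token as the API key.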
Download the latest release for your platform, unzip it, and run `cogpt-get-apptoken` (`cogpt-get-apptoken.exe` on Windows).
You can set a proxy through environment variables or command-line arguments. Run `./cogpt-get-apptoken -h` to see the command-line help.
- `GET /`: returns `Hi, I'm CoGPT.`
- `GET /health`: returns `{"status":"OK"}`
- `GET /v1/models`: returns the available models
- `POST /v1/chat/completions`: the chat API
- `POST /v1/embeddings`: the embeddings API
Pay attention that this endpoint is not fully compatible with the OpenAI API. For the `input` field, the OpenAI API accepts the following types:
- `string`: the string that will be turned into an embedding.
- `array` of strings: the array of strings that will be turned into embeddings.
- `array` of integers: the array of integers (tokens) that will be turned into an embedding.
- `array` of arrays of integers: the array of token arrays that will be turned into embeddings.

This service only accepts the first two types, plus arrays of arrays of strings.
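The accepted `input` shapes can be expressed as a small validity check. This is an illustrative sketch of the rule above, not code from the service:

```python
def accepted_embeddings_input(value):
    """Return True for the `input` shapes this service accepts:
    a single string, a list of strings, or a list of lists of strings.
    Token arrays (integers), which the OpenAI API also accepts, are rejected.
    """
    if isinstance(value, str):
        return True
    if isinstance(value, list) and value:
        if all(isinstance(item, str) for item in value):
            return True
        if all(
            isinstance(item, list) and item and all(isinstance(s, str) for s in item)
            for item in value
        ):
            return True
    return False
```

So `"hello"`, `["a", "b"]`, and `[["a"], ["b", "c"]]` pass, while token inputs like `[1, 2, 3]` do not.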
This service is not designed to be deployed on a public network. The best way to use it is to deploy it on your own computer or local network. You can also deploy it on a public network, but only for yourself. DO NOT share your token with others: if a token is accessed from many different IPs, it will be banned, and if too many tokens are requested from one IP, something bad may happen. So again, ONLY for yourself.
Acceptable:
- Deploy locally on your own computer
- Deploy on a local network for personal use or sharing within a small group
- Deploy on your own server for personal use

NOT acceptable:
- Providing a public interface for everyone to use: many tokens will be requested from one IP, which will cause problems.
- Providing public integrated (web) apps (such as ChatGPT-Next-Web): making too many requests with one token will cause problems.
- Deploying with serverless services (such as Vercel): serverless services change IP frequently and have short lifetimes.
- Any abuse of the service.

DO NOT try any of these approaches.
Create a directory and enter it with `mkdir CoGPT && cd CoGPT`. Then create a `docker-compose.yml` file with the following content:
```yaml
version: '3'
services:
  cogpt-api:
    image: geniucker/cogpt:latest
    environment:
      - HOST=0.0.0.0
    ports:
      - 8080:8080
    volumes:
      - ./db:/app/db
      - ./log:/app/log
    restart: unless-stopped
    container_name: cogpt-api
```
If you want to use the development version, replace `geniucker/cogpt:latest` with `geniucker/cogpt:dev`.
By default, the service listens on port 8080. To change the port, edit the `ports` section of `docker-compose.yml`. For example, to listen on port 80, change `8080:8080` to `80:8080`.
Other config options can also be changed in the `environment` section. More conveniently, you can edit a `.env` file (copy `.env.example` to `.env` and edit it). Note that the config for `db` and `log` should be changed in the `volumes` section of `docker-compose.yml`.
All config options are listed in Config.
Then run `docker compose up -d` to start the service.
Download the latest release for your platform, unzip it, and run `cogpt-api` (`cogpt-api.exe` on Windows).
By default, the service listens on `localhost:8080`. For configuration, see Config.
For Linux systems based on systemd, you can follow the steps below.
First, download the latest release for Linux. Unzip it, move `cogpt-api` to `/opt/cogpt/`, and grant it executable permission.
Then copy the content of cogpt-api.service to `/etc/systemd/system/cogpt-api.service`.
If you need to change the config, edit the `/opt/cogpt/.env` file.
Finally, run the following commands to enable and start the service.

```shell
sudo systemctl enable cogpt-api
sudo systemctl start cogpt-api
```
Run `sudo systemctl stop cogpt-api` to stop the service, and `sudo systemctl disable cogpt-api` to disable it.
For macOS, services are based on `launchd`.
First, download the latest release for macOS. Unzip it, move `cogpt-api` to `/opt/cogpt/`, and grant it executable permission.
Then copy the content of com.cogpt-api.plist to `/Library/LaunchDaemons/com.cogpt-api.plist`.
If you need to change the config, edit the `/opt/cogpt/.env` file.
Finally, run `sudo launchctl load /Library/LaunchDaemons/com.cogpt-api.plist` to start the service. Run `sudo launchctl unload /Library/LaunchDaemons/com.cogpt-api.plist` to stop it.
For Windows, you can use scheduled tasks. Follow the steps below.
First, download the latest release for Windows and unzip it to a directory, say `C:\CoGPT\`.
Then create a file `cogpt-api-service.ps1` in `C:\CoGPT\` and copy the content of cogpt-api-service.ps1 into it.
Start a PowerShell with administrator permission and run the following commands.

```powershell
cd C:\CoGPT\
./cogpt-api-service.ps1 enable
```
Here are all the available commands. All of them should be run in a PowerShell with administrator permission.

```powershell
./cogpt-api-service.ps1 enable   # enable and start the service
./cogpt-api-service.ps1 disable  # stop and disable the service
./cogpt-api-service.ps1 start    # start the service
./cogpt-api-service.ps1 stop     # stop the service
./cogpt-api-service.ps1 restart  # restart the service
./cogpt-api-service.ps1 status   # check the status of the service
```
If you want to share this service with your friends, it is not safe to share your GitHub app token directly. This feature is designed for that situation: you can create a map from so-called share tokens to real GitHub app tokens.
The first way is to set the environment variable, or edit the `.env` file: set `SHARE_TOKEN` to a string like `share-xxxxxxx1:ghu_xxxxxxx1,share-xxxxxxx2:ghu_xxxxxxx2`. The format is `share-token:real-token,share-token:real-token`; you can add as many pairs as you want.
The other way is to use a command-line argument: run `./cogpt-api -share-token share-xxxxxxx1:ghu_xxxxxxx1,share-xxxxxxx2:ghu_xxxxxxx2` to start the service. Again, you can add as many pairs as you want.
With this configured, when you make a request with a token that starts with `share-`, the service uses the real token mapped to it; a request with a token that starts with `ghu_` uses that token directly.
Note that share tokens must start with `share-`; map entries that don't start with `share-` are ignored.
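The mapping and resolution described above behave roughly like this. This is an illustrative sketch of the documented behavior, not the service's actual code:

```python
def parse_share_map(spec):
    """Parse 'share-a:ghu_a,share-b:ghu_b' into a dict.
    Entries whose share token does not start with 'share-' are ignored."""
    mapping = {}
    for pair in filter(None, spec.split(",")):
        share, _, real = pair.partition(":")
        if share.startswith("share-") and real:
            mapping[share] = real
    return mapping


def resolve_token(token, mapping):
    """Tokens starting with 'share-' are replaced by the mapped real token;
    other tokens (e.g. ghu_...) are used directly."""
    if token.startswith("share-"):
        return mapping.get(token)
    return token
```

For example, with `SHARE_TOKEN=share-abc:ghu_123,bad:ghu_456`, only `share-abc` is usable; the `bad:ghu_456` entry is dropped, and a real `ghu_` token still passes through unchanged.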
To generate a random share token, download the latest release for your platform, unzip it, and run `./gen-share-token`.
Edit `.env`, set environment variables, or pass command-line arguments.
Here are the config options and their default values (.env or environment variables):
| key | default | description |
|---|---|---|
| `HOST` | `localhost` | Host to listen on |
| `PORT` | `8080` | Port to listen on |
| `CACHE` | `true` | Whether to cache tokens in a SQLite database. If `false`, tokens are cached in memory |
| `CACHE_PATH` | `db/cache.sqlite3` | Path to the SQLite database. Only used if `CACHE` is `true` |
| `DEBUG` | `false` | Whether to enable debug mode. If `true`, the service prints debug info |
| `LOG_LEVEL` | `info` | Log level |
| `SHARE_TOKEN` | `""` | Map of share tokens to real tokens, e.g. `SHARE_TOKEN=share-xxxxxxx1:ghu_xxxxxxx1,share-xxxxxxx2:ghu_xxxxxxx2` |
For command-line arguments, run `./cogpt-api -h` to see the help message.
Precedence: command-line arguments > environment variables > `.env`.
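That precedence can be modeled as a simple merge, lowest-precedence source first. This is an illustrative sketch, not the service's actual config loader:

```python
def effective_config(cli, env, dotenv, defaults):
    """Merge config sources: command line > environment > .env > defaults.

    Each argument is a dict; a key set in a higher-precedence source
    overrides the same key from a lower-precedence one.
    """
    merged = dict(defaults)
    for source in (dotenv, env, cli):  # lowest precedence first
        merged.update({k: v for k, v in source.items() if v is not None})
    return merged


# Example: the environment's PORT beats .env; the CLI's HOST beats both.
config = effective_config(
    cli={"HOST": "0.0.0.0"},
    env={"PORT": "9090"},
    dotenv={"HOST": "localhost", "PORT": "8080"},
    defaults={"HOST": "localhost", "PORT": "8080", "LOG_LEVEL": "info"},
)
print(config["HOST"], config["PORT"], config["LOG_LEVEL"])  # 0.0.0.0 9090 info
```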
Environment variables for proxies are also supported: `ALL_PROXY`, `HTTPS_PROXY`, and `HTTP_PROXY`. You can also set the proxy via command-line arguments; run `./cogpt-api -h` to see the help message. Precedence: command-line arguments > environment variables (`ALL_PROXY` > `HTTPS_PROXY` > `HTTP_PROXY`).