This proxy API adds data persistence and chat separation to the ChatGPT API. It provides a layer of abstraction between users and the ChatGPT API, allowing chat sessions to be stored and retrieved easily.
- Data persistence: stores chat sessions in a database for easy retrieval.
- Chat separation: separates chats by session, making it easy to retrieve previous chats.
- Easy deployment: designed to run on Google Cloud Functions with a deploy.sh script.
- A Google Cloud account with a project that has the Cloud Functions API enabled
- An UpStash account (any Redis provider will work, but this project uses UpStash since it has a nice free tier)
- A ChatGPT API key (you can get one here)
- Python 3.9 or higher (this is the runtime, you don't need to install Python on your machine unless you want to run the code locally)
Before deploying the proxy API, you'll need to configure the following environment variables:
- `GPT_API_KEY`: your ChatGPT API key
- `REDIS_HOST`: the host of your UpStash Redis instance
- `REDIS_PORT`: the port of your UpStash Redis instance
- `REDIS_PASSWORD`: the password of your UpStash Redis instance
You can configure these environment variables by copying the `.env.example` file to `.env` and filling in the values. There are also some optional environment variables you can configure; check the `.env.example` file for more information.
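As a sketch, a filled-in `.env` might look like the following (every value below is a placeholder, not a real credential):

```shell
# .env — example values only; replace each one with your own credentials
GPT_API_KEY=sk-your-openai-api-key
REDIS_HOST=your-instance.upstash.io
REDIS_PORT=6379
REDIS_PASSWORD=your-redis-password
```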
Note: If you have Python installed on your machine, you can quickly run the function locally using the `functions-framework` package. To do this, run `pip install functions-framework` and then run:

```shell
export $(cat .env | xargs); functions-framework --target=main
```

in the root directory of the project. This will start a local server (with all the env vars loaded) on port 8080, which you can use to test the function. This is not recommended for production use.
You can deploy this proxy API to Google Cloud Functions using the gcloud CLI tool. Here are the steps to deploy the function:
- Install the gcloud CLI tool on your machine if it is not already installed.
- Navigate to the root directory of the project in your terminal.
- Use the `gcloud init` command to initialize gcloud and set up your project configuration.
- Use the `gcloud functions deploy` command to deploy the function to Google Cloud Functions:
```shell
gcloud functions deploy <your-function-name> \
  --runtime python39 \
  --region=us-central1 \
  --trigger-http \
  --project <name-of-your-gcp-project> \
  --source . \
  --entry-point main \
  --allow-unauthenticated
```
This will deploy your function with an HTTP trigger. The `--allow-unauthenticated` flag allows unauthenticated access to your function endpoint.
Once the function is deployed, you'll receive a URL which you can use to send requests to and retrieve chat logs for your application.
To start using the proxy API, you can make POST requests to the following endpoint:
https://[YOUR_CLOUD_FUNCTION_URL]/<chat_id>/chat
Where `<chat_id>` is the ID of the chat session you want to interact with. If the chat session doesn't exist, it will be created automatically; the `chat_id` value is up to you.
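As a sketch, a client only needs to build the per-chat URL and a JSON body with a `content` field (the base URL below is a placeholder for your deployed function URL):

```python
# Sketch: build the request for POST /<chat_id>/chat.
BASE_URL = "https://YOUR_CLOUD_FUNCTION_URL"  # placeholder, not a real endpoint

def build_chat_request(base_url: str, chat_id: str, message: str):
    """Return the URL and JSON payload for sending one user message to a chat."""
    url = f"{base_url.rstrip('/')}/{chat_id}/chat"
    payload = {"content": message}  # the API expects a single "content" field
    return url, payload

url, payload = build_chat_request(BASE_URL, "my-session", "Hello!")
# e.g. with the requests library:  requests.post(url, json=payload).json()["content"]
```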
GET `/` (Check service availability)

Parameters: None

| http code | content-type | response |
|-----------|--------------|----------|
| 200 | `text/html` | `Hello, World!` |

```shell
curl -X GET http://localhost:8080/
```
GET `/system` (Get the current global system prompt)

Parameters: None

| http code | content-type | response |
|-----------|--------------|----------|
| 200 | `application/json` | `{"content": "You are a python engineer..."}` |

```shell
curl -X GET http://localhost:8080/system
```
POST `/system` (Set the global system prompt, the base for all chats' system prompts)

| name | type | data type | description |
|------|------|-----------|-------------|
| content | required | string | A message telling ChatGPT how it should behave. Check suggestions |

| http code | content-type | response |
|-----------|--------------|----------|
| 200 | `application/json` | `{}` |

```shell
curl -X POST -H "Content-Type: application/json" --data @system.json http://localhost:8080/system
```
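The curl call above reads its payload from a `system.json` file; as a sketch, that file only needs a `content` field (the prompt text below is just an example):

```shell
# Create a system.json payload for POST /system (example prompt text)
cat > system.json <<'EOF'
{"content": "You are a helpful assistant that answers concisely."}
EOF
```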
OpenAI provides moderation checks for user messages, which can help detect unwanted behavior in a conversation.
POST `/mod` (Send user messages to this endpoint before calling `/chat`)

| name | type | data type | description |
|------|------|-----------|-------------|
| content | required | string | The message to evaluate for possible moderation flags |

| http code | content-type | response |
|-----------|--------------|----------|
| 200 | `application/json` | `{"id": "string", "model": "string", "results": [{...}]}` |

```shell
curl -X POST -H "Content-Type: application/json" --data @message.json http://localhost:8080/mod
```
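A sketch of how a client might gate messages on the `/mod` response before calling `/chat`. The response follows OpenAI's moderation format, where each entry in `results` carries a boolean `flagged` field (the sample response below is illustrative):

```python
def is_message_allowed(moderation_response: dict) -> bool:
    """Return True when no moderation result flags the message."""
    return not any(r.get("flagged", False)
                   for r in moderation_response.get("results", []))

# Example (abridged) moderation response:
resp = {"id": "modr-1", "model": "text-moderation-stable",
        "results": [{"flagged": False, "categories": {"hate": False}}]}
if is_message_allowed(resp):
    pass  # safe to forward the message to POST /<chat_id>/chat
```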
POST `/<chat_id>/system` (Same as the global system prompt, but for an individual chat)

```shell
curl -X POST -H "Content-Type: application/json" --data @system.json http://localhost:8080/<chat_id>/system
```
GET `/<chat_id>/system` (Same as the global system prompt, but for an individual chat)

```shell
curl -X GET http://localhost:8080/<chat_id>/system
```
POST `/<chat_id>/chat` (Send a message to a chat)

| name | type | data type | description |
|------|------|-----------|-------------|
| content | required | string | User message to send to ChatGPT and get the response for |

| http code | content-type | response |
|-----------|--------------|----------|
| 200 | `application/json` | `{"content": "string", "role": "string", "tokens": {"completion_tokens": number, "prompt_tokens": number, "total_tokens": number}}` |

```shell
curl -X POST -H "Content-Type: application/json" --data @message.json http://localhost:8080/<chat_id>/chat
```
GET `/<chat_id>/chat` (Get the list of all messages in this chat)

Parameters: None

| http code | content-type | response |
|-----------|--------------|----------|
| 200 | `application/json` | `[{"content": "string", "role": "string"}]` |

```shell
curl -X GET http://localhost:8080/<chat_id>/chat
```
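Since `GET /<chat_id>/chat` returns a list of `{content, role}` objects, a client can render the stored history into a readable transcript. A minimal sketch:

```python
def render_transcript(messages: list) -> str:
    """Format a chat log ([{'content': ..., 'role': ...}]) as plain text."""
    return "\n".join(f"{m['role']}: {m['content']}" for m in messages)

# Example chat log in the shape returned by the endpoint:
log = [{"content": "Hi", "role": "user"},
       {"content": "Hello! How can I help?", "role": "assistant"}]
print(render_transcript(log))
# user: Hi
# assistant: Hello! How can I help?
```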
POST `/<chat_id>/img` (Send an image URL to a chat and get the image description)

| name | type | data type | description |
|------|------|-----------|-------------|
| content | required | string | A prompt about the image, like: "What's on this image?" |
| url | required | string | The URL of the image |

| http code | content-type | response |
|-----------|--------------|----------|
| 200 | `application/json` | `{"content": "string", "role": "string", "tokens": {"completion_tokens": number, "prompt_tokens": number, "total_tokens": number}}` |

```shell
curl -X POST -H "Content-Type: application/json" --data @image.json http://localhost:8080/<chat_id>/img
```
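The `image.json` file read by the curl call above needs both fields from the parameter table. As a sketch, building that payload programmatically (the image URL below is a placeholder):

```python
import json

def build_image_payload(prompt: str, image_url: str) -> str:
    """Serialize the /<chat_id>/img request body: a prompt plus the image URL."""
    return json.dumps({"content": prompt, "url": image_url})

payload = build_image_payload("What's on this image?",
                              "https://example.com/cat.png")  # placeholder URL
```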
POST `/<chat_id>/clear` (Clear all messages from a chat)

Parameters: None

| http code | content-type | response |
|-----------|--------------|----------|
| 200 | `application/json` | `{}` |

```shell
curl -X POST http://localhost:8080/<chat_id>/clear
```
This repository is licensed under the MIT license. See `LICENSE` for more information.