Unlimited free GPT-3.5 turbo API service.
Features · Examples · Building · Reference · License
- Streaming API. freegpt35 allows responses to be sent back incrementally in chunks.
- Easy Deploy. Containerized; starts in seconds using docker compose.
- Login free. No need to worry about authorization details; use it at a glance.
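The streaming mode returns server-sent events, one `data:` chunk at a time (a real request adds `"stream": true` to the curl body shown below). As a minimal sketch of what consuming that stream looks like — the chunk payloads here are illustrative samples, not a live capture — the incremental content can be extracted like this:

```shell
# Simulated SSE chunks in the shape freegpt35 streams back (sample data, not a live capture)
chunks='data: {"choices":[{"delta":{"content":"Hi"}}]}
data: {"choices":[{"delta":{"content":" there!"}}]}
data: [DONE]'

# Pull each incremental "content" fragment out of the data: lines
fragments=$(printf '%s\n' "$chunks" | sed -n 's/.*"content":"\([^"]*\)".*/\1/p')
printf '%s\n' "$fragments"
```

In a live session the same filter can be applied to the output of a streaming curl call, printing the reply as it arrives rather than after completion.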
mkdir freegpt35 && cd freegpt35
curl -O https://raw.githubusercontent.com/hominsu/freegpt35/main/deploy/docker-compose.yml
docker compose up -d
Once deployed, use the following command to confirm that everything is working.
curl -X POST "http://localhost:3000/v1/chat/completions" \
-H "Authorization: Bearer anything" \
-H "Content-Type: application/json" \
-d '{
"model": "gpt-3.5-turbo",
"messages": [{"role": "user", "content": "Hello"}]
}'
{"id":"chatcmpl-*********","created":9999999999,"model":"gpt-3.5-turbo","object":"chat.completion","choices":[{"finish_reason":"stop","index":0,"message":{"content":"Hi there! How can I assist you today?","role":"assistant"}}],"usage":{"prompt_tokens":1,"completion_tokens":10,"total_tokens":11}}
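To pull just the assistant's reply out of that JSON, pipe the response body into a small parser. A sketch using the python3 stdlib on a sample response like the one above (jq works equally well if you have it; the `chatcmpl-xxx` id is a placeholder):

```shell
# Sample response body; in practice: response=$(curl -sS -X POST ... )
response='{"id":"chatcmpl-xxx","model":"gpt-3.5-turbo","object":"chat.completion","choices":[{"finish_reason":"stop","index":0,"message":{"content":"Hi there! How can I assist you today?","role":"assistant"}}]}'

# Extract choices[0].message.content with the stdlib json module
reply=$(printf '%s' "$response" | python3 -c 'import json,sys; print(json.load(sys.stdin)["choices"][0]["message"]["content"])')
printf '%s\n' "$reply"
```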
Here is an nginx conf template that you can refer to. For more information about running NGINX in Docker, check this post: Using NGINX Gracefully in Docker (优雅地在 Docker 中使用 NGINX).
upstream freegpt35 {
server 127.0.0.1:3000;
}
server {
listen 80;
listen [::]:80;
server_name your.domain.name;
return 301 https://$server_name$request_uri;
}
server {
listen 443 ssl;
listen [::]:443 ssl;
server_name your.domain.name;
ssl_certificate /etc/nginx/ssl/your.domain.name/full.pem;
ssl_certificate_key /etc/nginx/ssl/your.domain.name/key.pem;
ssl_session_timeout 5m;
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers TLS13_AES_128_GCM_SHA256:TLS13_AES_256_GCM_SHA384:TLS13_CHACHA20_POLY1305_SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305;
ssl_prefer_server_ciphers on;
location /v1/chat/completions {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto "https";
proxy_pass http://freegpt35;
proxy_buffering off;
proxy_cache off;
send_timeout 600;
proxy_connect_timeout 600;
proxy_send_timeout 600;
proxy_read_timeout 600;
chunked_transfer_encoding on;
}
error_page 500 502 503 504 /50x.html;
}
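To adapt the template, save it (e.g. as freegpt35.conf) and substitute your real domain for the your.domain.name placeholder. A small sketch — the one-line file below stands in for the full template, and api.example.com is a hypothetical domain:

```shell
# Stand-in for the saved template; in practice this file holds the full conf above
printf 'server_name your.domain.name;\n' > freegpt35.conf

# Replace the placeholder everywhere and write the result
DOMAIN="api.example.com"
sed "s/your\.domain\.name/${DOMAIN}/g" freegpt35.conf > freegpt35.live.conf
cat freegpt35.live.conf
```

After installing the file, verify the syntax with `nginx -t` and apply it with `nginx -s reload`.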
If you subscribe to Vercel, you can try this deployment method; otherwise, do not waste your time: on the Hobby plan, serverless API routes can only run for 5 seconds, after which the route responds with a FUNCTION_INVOCATION_TIMEOUT error.
You can also define environment variables for some specific cases, e.g. NEXT_PUBLIC_BASE_URL, NEXT_PUBLIC_API_URL, NEXT_PUBLIC_MAX_RETRIES, NEXT_PUBLIC_USER_AGENT.
Once deployed, you can test with curl again:
curl -X POST "https://freegpt35.vercel.app/v1/chat/completions" \
-H "Authorization: Bearer anything" \
-H "Content-Type: application/json" \
-d '{
"model": "gpt-3.5-turbo",
"messages": [{"role": "user", "content": "Hello"}]
}'
{"id":"chatcmpl-**********","created":9999999999,"model":"gpt-3.5-turbo","object":"chat.completion","choices":[{"finish_reason":"stop","index":0,"message":{"content":"Hey there! How's it going?","role":"assistant"}}],"usage":{"prompt_tokens":1,"completion_tokens":8,"total_tokens":9}}
If your country/region cannot access ChatGPT, you might need a proxy. In this case, you need to build your own Docker image (Next.js replaces the environment variables at build time).
You can specify your platform (amd64 | arm64).
NEXT_PUBLIC_BASE_URL="https://chat.openai.com" \
NEXT_PUBLIC_API_URL="/backend-anon/conversation" \
NEXT_PUBLIC_MAX_RETRIES="5" \
NEXT_PUBLIC_USER_AGENT="Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36" \
NEXT_PUBLIC_PROXY_ENABLE=true \
NEXT_PUBLIC_PROXY_PROTOCOL=http \
NEXT_PUBLIC_PROXY_HOST="127.0.0.1" \
NEXT_PUBLIC_PROXY_PORT="7890" \
NEXT_PUBLIC_PROXY_AUTH="false" \
NEXT_PUBLIC_PROXY_USERNAME="" \
NEXT_PUBLIC_PROXY_PASSWORD="" \
docker buildx bake --file deploy/docker-bake.hcl --load --set "*.platform=linux/amd64"
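For reference, the NEXT_PUBLIC_PROXY_* connection variables above describe an ordinary proxy URL. This sketch only illustrates that shape — how the app assembles them internally is an assumption, not documented behavior:

```shell
# Same values as in the build command above
NEXT_PUBLIC_PROXY_PROTOCOL="http"
NEXT_PUBLIC_PROXY_HOST="127.0.0.1"
NEXT_PUBLIC_PROXY_PORT="7890"

# Equivalent proxy URL for those values
proxy_url="${NEXT_PUBLIC_PROXY_PROTOCOL}://${NEXT_PUBLIC_PROXY_HOST}:${NEXT_PUBLIC_PROXY_PORT}"
echo "$proxy_url"
```

With NEXT_PUBLIC_PROXY_AUTH set to true, the username and password would presumably join the authority part as well (user:pass@host).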
Distributed under the AGPL 3.0 license. See LICENSE for more information.