Welcome to gpt4free-demo 👋

Set up a free OpenAI GPT-4 API on your own machine

IT Man - Tech #38 - Setting Up Your Own Free GPT-4 API with gpt4free-ts [Vietnamese]

Demo

Demo GIF 1 Demo GIF 2

Usage

Follow these steps to get gpt4free-demo up and running:

  1. Clone the Repository:

    git clone https://github.com/username/gpt4free-demo.git
    cd gpt4free-demo
  2. Set Up Environment Variables: Copy the example environment file and set up your own variables:

    cp .env.example .env

    Open .env with your preferred text editor and fill in your own values for the given variables. Save and close the file when you're finished.

  3. Start the Services: Start your services using Docker Compose:

    docker-compose up -d

    Services Start GIF

    If you change any environment variables in your .env file, restart your services with docker-compose down and docker-compose up -d.

  4. Access the API: Once the services are running, the API will be accessible at:

    • List the supported models and sites: http://127.0.0.1:13000/supports [GET]
    • Return the full reply once the chat is complete: http://127.0.0.1:13000/ask?prompt=***&model=***&site=*** [POST/GET]
    • Return the reply as an event stream: http://127.0.0.1:13000/ask/stream?prompt=***&model=***&site=*** [POST/GET]

    More usage examples can be found at xiangsx/gpt4free-ts.
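
If you prefer calling the API from code, here is a minimal TypeScript sketch of the /supports and /ask endpoints, assuming Node 18+ (built-in fetch). The smoke.ts file name and the vita/gpt-3.5-turbo defaults are only illustrative (taken from the Hurl example below), and no particular response shape is assumed, so replies are simply logged:

    // smoke.ts - a minimal sketch, assuming Node 18+ for the built-in fetch.
    // The file name and the site/model defaults are illustrative only.
    const BASE = "http://127.0.0.1:13000";

    // List the supported sites and models.
    async function listSupports(): Promise<void> {
      const res = await fetch(`${BASE}/supports`);
      console.log(await res.json());
    }

    // Ask a question and wait for the complete reply.
    async function ask(prompt: string, site = "vita", model = "gpt-3.5-turbo"): Promise<void> {
      const params = new URLSearchParams({ prompt, site, model });
      const res = await fetch(`${BASE}/ask?${params}`);
      console.log(await res.text());
    }

    (async () => {
      await listSupports();
      await ask("Tell me a joke about Software Engineering");
    })();

Run it with a TypeScript runner such as npx tsx smoke.ts once the Docker services are up.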


Testing with Hurl

Hurl is a command-line tool to run HTTP requests. You can use it to test the endpoints of this API. Here's how to get started:

  1. Install Hurl: Follow the instructions on the official website to install Hurl on your system.

  2. Create a Hurl File: You can create a file with a .hurl extension to define the HTTP requests you want to test. Here's an example gpt.hurl file for this project:

    # List all supported models and sites
    GET http://127.0.0.1:13000/supports
    
    # Ask a question via the vita site
    GET http://127.0.0.1:13000/ask
    [QueryStringParams]
    site: vita
    model: gpt-3.5-turbo
    prompt: Tell me a joke about Software Engineering
    
  3. Run the Hurl File: Use the following command to execute the gpt.hurl file:

    hurl --verbose gpt.hurl

    This will run the defined HTTP requests and print the responses to the terminal.

  4. Read the Documentation: For more advanced usage, refer to Hurl's samples documentation.
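
Hurl works well for simple request checks, but the /ask/stream endpoint answers with an event stream, which can be easier to watch from a small script. Below is a minimal TypeScript sketch for that, again assuming Node 18+ and reusing the same illustrative site/model values; the stream is printed raw, without assuming its exact event format:

    // stream.ts - a minimal sketch for the /ask/stream endpoint, assuming Node 18+.
    // The file name and the site/model defaults are illustrative only.
    const BASE = "http://127.0.0.1:13000";

    async function askStream(prompt: string, site = "vita", model = "gpt-3.5-turbo"): Promise<void> {
      const params = new URLSearchParams({ prompt, site, model });
      const res = await fetch(`${BASE}/ask/stream?${params}`);
      if (!res.ok || !res.body) {
        throw new Error(`request failed with status ${res.status}`);
      }
      const reader = res.body.getReader();
      const decoder = new TextDecoder();
      // Print each chunk of the event stream as it arrives.
      for (;;) {
        const { value, done } = await reader.read();
        if (done) break;
        process.stdout.write(decoder.decode(value, { stream: true }));
      }
      process.stdout.write("\n");
    }

    askStream("Tell me a joke about Software Engineering").catch(console.error);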

Resources

Reference Videos

  • IT Man - Talk #34
  • IT Man - Tip #36

Author

Show your support

kofi paypal buymeacoffee