gpt4free

decentralising the AI industry, just some language model APIs...


Written by @xtekky & maintained by @hlohaus


By using this repository or any code related to it, you agree to the legal notice. The author is not responsible for any copies, forks, re-uploads made by other users, or anything else related to GPT4Free. This is the author's only account and repository. To prevent impersonation or irresponsible actions, please comply with the GNU GPL license this repository uses.

Note

Latest version: available on PyPI and Docker Hub:

pip install -U g4f
docker pull hlohaus789/g4f


🛠️ Getting Started

Docker container

Quick start:
  1. Download and install Docker.
  2. Pull the latest image and run the container:
docker pull hlohaus789/g4f
docker run -p 8080:80 -p 1337:1337 -p 7900:7900 --shm-size="2g" hlohaus789/g4f:latest
  3. Open the included client at http://localhost:8080/chat/ or set the API base in your client to http://localhost:1337/v1.
  4. (Optional) If you need to log in to a provider, you can view the desktop from the container here: http://localhost:7900/?autoconnect=1&resize=scale&password=secret.
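Once the container is running, you can also talk to the bundled API from any OpenAI-compatible client. Below is a minimal sketch using the openai Python package (version < 1.0, as in the Usage section further down); it assumes the container exposes the OpenAI-style /v1 routes on port 1337 as mapped above:

import openai

openai.api_key = ""  # not required unless you use embeddings (see the Interference API section)
openai.api_base = "http://localhost:1337/v1"

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)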

Use the Python package

Prerequisites:
  1. Download and install Python (version 3.10+ is recommended).
  2. Install Google Chrome for providers that require a webdriver.
Install using PyPI:
pip install -U g4f
or:
  1. Clone the GitHub repository:
git clone https://github.com/xtekky/gpt4free.git
  2. Navigate to the project directory:
cd gpt4free
  3. (Recommended) Create a Python virtual environment: you can follow the Python official documentation for virtual environments.
python3 -m venv venv
  4. Activate the virtual environment:
    • On Windows:
    .\venv\Scripts\activate

    • On macOS and Linux:
    source venv/bin/activate

  5. Install the required Python packages from requirements.txt:
pip install -r requirements.txt
  6. Create a test.py file in the root folder and start using the repo; further instructions are below.
import g4f
...
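For example, test.py could contain something like the following minimal sketch (it mirrors the ChatCompletion example from the Usage section below):

import g4f

response = g4f.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response)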

Docker for Developers

If you have Docker installed, you can easily set up and run the project without manually installing dependencies.

  1. First, ensure you have both Docker and Docker Compose installed.

  2. Clone the GitHub repo:

git clone https://github.com/xtekky/gpt4free.git
  3. Navigate to the project directory:
cd gpt4free
  4. Build the Docker image:
docker pull selenium/node-chrome
docker-compose build
  5. Start the service using Docker Compose:
docker-compose up

Your server will now be running at http://localhost:1337. You can interact with the API or run your tests as you would normally.

To stop the Docker containers, simply run:

docker-compose down

Note

When using Docker, any changes you make to your local files will be reflected in the Docker container thanks to the volume mapping in the docker-compose.yml file. If you add or remove dependencies, however, you'll need to rebuild the Docker image using docker-compose build.

💡 Usage

The g4f Package

ChatCompletion

import g4f

g4f.debug.logging = True  # Enable debug logging
g4f.debug.check_version = False  # Disable automatic version checking
print(g4f.Provider.Bing.params)  # Print supported args for Bing

# Use a provider selected automatically for the given model
## Streamed completion
response = g4f.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello"}],
    stream=True,
)
for message in response:
    print(message, flush=True, end='')

## Normal response
response = g4f.ChatCompletion.create(
    model=g4f.models.gpt_4,
    messages=[{"role": "user", "content": "Hello"}],
)  # Alternative model setting

print(response)
Completion
import g4f

allowed_models = [
    'code-davinci-002',
    'text-ada-001',
    'text-babbage-001',
    'text-curie-001',
    'text-davinci-002',
    'text-davinci-003'
]

response = g4f.Completion.create(
    model='text-davinci-003',
    prompt='say this is a test'
)

print(response)
Providers
import g4f

# Print all available providers
print([
    provider.__name__
    for provider in g4f.Provider.__providers__
    if provider.working
])

# Execute with a specific provider
response = g4f.ChatCompletion.create(
    model="gpt-3.5-turbo",
    provider=g4f.Provider.Aichat,
    messages=[{"role": "user", "content": "Hello"}],
    stream=True,
)
for message in response:
    print(message)
Using Browser

Some providers use a browser to bypass bot protection. They use the Selenium WebDriver to control the browser. The browser settings and login data are saved in a custom directory. If headless mode is enabled, the browser windows are loaded invisibly. For performance reasons, it is recommended to reuse the browser instances and close them yourself at the end:

import g4f
from undetected_chromedriver import Chrome, ChromeOptions
from g4f.Provider import (
    Bard,
    Poe,
    AItianhuSpace,
    MyShell,
    PerplexityAi,
)

options = ChromeOptions()
options.add_argument("--incognito")
webdriver = Chrome(options=options, headless=True)
for idx in range(10):
    response = g4f.ChatCompletion.create(
        model=g4f.models.default,
        provider=g4f.Provider.MyShell,
        messages=[{"role": "user", "content": "Suggest me a name."}],
        webdriver=webdriver
    )
    print(f"{idx}:", response)
webdriver.quit()
Async Support

To enhance speed and overall performance, execute providers asynchronously. The total execution time will be determined by the duration of the slowest provider's execution.

import g4f
import asyncio

_providers = [
    g4f.Provider.Aichat,
    g4f.Provider.ChatBase,
    g4f.Provider.Bing,
    g4f.Provider.GptGo,
    g4f.Provider.You,
    g4f.Provider.Yqcloud,
]

async def run_provider(provider: g4f.Provider.BaseProvider):
    try:
        response = await g4f.ChatCompletion.create_async(
            model=g4f.models.default,
            messages=[{"role": "user", "content": "Hello"}],
            provider=provider,
        )
        print(f"{provider.__name__}:", response)
    except Exception as e:
        print(f"{provider.__name__}:", e)
        
async def run_all():
    calls = [
        run_provider(provider) for provider in _providers
    ]
    await asyncio.gather(*calls)

asyncio.run(run_all())
Proxy and Timeout Support

All providers support specifying a proxy and increasing the timeout in the create functions.

import g4f

response = g4f.ChatCompletion.create(
    model=g4f.models.default,
    messages=[{"role": "user", "content": "Hello"}],
    proxy="http://host:port",
    # or socks5://user:pass@host:port
    timeout=120,  # in secs
)

print("Result:", response)

You can also set a proxy globally via an environment variable:

export G4F_PROXY="http://host:port"
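The same can be done from Python for the current process; a minimal sketch, assuming g4f reads G4F_PROXY from the process environment as the export above implies:

import os

# Equivalent of `export G4F_PROXY="http://host:port"` for this Python process
os.environ["G4F_PROXY"] = "http://host:port"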

Interference openai-proxy API (use with the openai Python package)

Run interference API from PyPi package

from g4f.api import run_api

run_api()

Run interference API from repo

If you want to use the embedding function, you need a Hugging Face token. You can get one at Hugging Face Tokens. Make sure your role is set to write. Once you have your token, use it in place of the OpenAI API key.

Run server:

g4f api

or

python -m g4f.api.run
import openai

# Set your Hugging Face token as the API key if you use embeddings
# If you don't use embeddings, leave it empty
openai.api_key = "YOUR_HUGGING_FACE_TOKEN"  # Replace with your actual token

# Set the API base URL if needed, e.g., for a local development environment
openai.api_base = "http://localhost:1337/v1"

def main():
    chat_completion = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "write a poem about a tree"}],
        stream=True,
    )

    if isinstance(chat_completion, dict):
        # Not streaming
        print(chat_completion.choices[0].message.content)
    else:
        # Streaming
        for token in chat_completion:
            content = token["choices"][0]["delta"].get("content")
            if content is not None:
                print(content, end="", flush=True)

if __name__ == "__main__":
    main()
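If you want to exercise the embedding function mentioned above, the same client can be pointed at the embeddings endpoint. This is a hedged sketch: it assumes the interference API exposes the OpenAI-style Embedding route and that the model name below is accepted; neither is confirmed by this README.

import openai

openai.api_key = "YOUR_HUGGING_FACE_TOKEN"  # embeddings require the Hugging Face token
openai.api_base = "http://localhost:1337/v1"

# Hypothetical call: endpoint support and model name are assumptions
embedding = openai.Embedding.create(
    model="text-embedding-ada-002",
    input="Hello world",
)
print(embedding["data"][0]["embedding"][:5])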

🚀 Providers and Models

GPT-4

| Website | Provider | GPT-3.5 | GPT-4 | Stream | Status | Auth |
| ------- | -------- | ------- | ----- | ------ | ------ | ---- |
| bing.com | g4f.Provider.Bing | ❌ | ✔️ | ✔️ | Active | ❌ |
| chat.geekgpt.org | g4f.Provider.GeekGpt | ✔️ | ✔️ | ✔️ | Unknown | ❌ |
| gptchatly.com | g4f.Provider.GptChatly | ✔️ | ✔️ | ❌ | Unknown | ❌ |
| liaobots.site | g4f.Provider.Liaobots | ✔️ | ✔️ | ✔️ | Unknown | ❌ |
| www.phind.com | g4f.Provider.Phind | ❌ | ✔️ | ✔️ | Unknown | ❌ |
| raycast.com | g4f.Provider.Raycast | ✔️ | ✔️ | ✔️ | Unknown | ✔️ |

GPT-3.5

Website Provider GPT-3.5 GPT-4 Stream Status Auth
www.aitianhu.com g4f.Provider.AItianhu βœ”οΈ ❌ βœ”οΈ Unknown ❌
chat3.aiyunos.top g4f.Provider.AItianhuSpace βœ”οΈ ❌ βœ”οΈ Unknown ❌
e.aiask.me g4f.Provider.AiAsk βœ”οΈ ❌ βœ”οΈ Unknown ❌
chat-gpt.org g4f.Provider.Aichat βœ”οΈ ❌ ❌ Unknown ❌
www.chatbase.co g4f.Provider.ChatBase βœ”οΈ ❌ βœ”οΈ Active ❌
chatforai.store g4f.Provider.ChatForAi βœ”οΈ ❌ βœ”οΈ Unknown ❌
chatgpt.ai g4f.Provider.ChatgptAi βœ”οΈ ❌ βœ”οΈ Active ❌
chatgptx.de g4f.Provider.ChatgptX βœ”οΈ ❌ βœ”οΈ Unknown ❌
chat-shared2.zhile.io g4f.Provider.FakeGpt βœ”οΈ ❌ βœ”οΈ Active ❌
freegpts1.aifree.site g4f.Provider.FreeGpt βœ”οΈ ❌ βœ”οΈ Active ❌
gptalk.net g4f.Provider.GPTalk βœ”οΈ ❌ βœ”οΈ Active ❌
ai18.gptforlove.com g4f.Provider.GptForLove βœ”οΈ ❌ βœ”οΈ Active ❌
gptgo.ai g4f.Provider.GptGo βœ”οΈ ❌ βœ”οΈ Active ❌
hashnode.com g4f.Provider.Hashnode βœ”οΈ ❌ βœ”οΈ Active ❌
app.myshell.ai g4f.Provider.MyShell βœ”οΈ ❌ βœ”οΈ Unknown ❌
noowai.com g4f.Provider.NoowAi βœ”οΈ ❌ βœ”οΈ Unknown ❌
chat.openai.com g4f.Provider.OpenaiChat βœ”οΈ ❌ βœ”οΈ Unknown βœ”οΈ
theb.ai g4f.Provider.Theb βœ”οΈ ❌ βœ”οΈ Unknown βœ”οΈ
sdk.vercel.ai g4f.Provider.Vercel βœ”οΈ ❌ βœ”οΈ Unknown ❌
you.com g4f.Provider.You βœ”οΈ ❌ βœ”οΈ Active ❌
chat9.yqcloud.top g4f.Provider.Yqcloud βœ”οΈ ❌ βœ”οΈ Unknown ❌
chat.acytoo.com g4f.Provider.Acytoo βœ”οΈ ❌ βœ”οΈ Inactive ❌
aibn.cc g4f.Provider.Aibn βœ”οΈ ❌ βœ”οΈ Inactive ❌
ai.ls g4f.Provider.Ails βœ”οΈ ❌ βœ”οΈ Inactive ❌
chatgpt4online.org g4f.Provider.Chatgpt4Online βœ”οΈ ❌ βœ”οΈ Inactive ❌
chat.chatgptdemo.net g4f.Provider.ChatgptDemo βœ”οΈ ❌ βœ”οΈ Inactive ❌
chatgptduo.com g4f.Provider.ChatgptDuo βœ”οΈ ❌ ❌ Inactive ❌
chatgptfree.ai g4f.Provider.ChatgptFree βœ”οΈ ❌ ❌ Inactive ❌
chatgptlogin.ai g4f.Provider.ChatgptLogin βœ”οΈ ❌ βœ”οΈ Inactive ❌
cromicle.top g4f.Provider.Cromicle βœ”οΈ ❌ βœ”οΈ Inactive ❌
gptgod.site g4f.Provider.GptGod βœ”οΈ ❌ βœ”οΈ Inactive ❌
opchatgpts.net g4f.Provider.Opchatgpts βœ”οΈ ❌ βœ”οΈ Inactive ❌
chat.ylokh.xyz g4f.Provider.Ylokh βœ”οΈ ❌ βœ”οΈ Inactive ❌

Other

Website Provider GPT-3.5 GPT-4 Stream Status Auth
bard.google.com g4f.Provider.Bard ❌ ❌ ❌ Unknown βœ”οΈ
deepinfra.com g4f.Provider.DeepInfra ❌ ❌ βœ”οΈ Active ❌
huggingface.co g4f.Provider.HuggingChat ❌ ❌ βœ”οΈ Active βœ”οΈ
www.llama2.ai g4f.Provider.Llama2 ❌ ❌ βœ”οΈ Unknown ❌
open-assistant.io g4f.Provider.OpenAssistant ❌ ❌ βœ”οΈ Inactive βœ”οΈ

Models

| Model | Base Provider | Provider | Website |
| ----- | ------------- | -------- | ------- |
| palm | Google | g4f.Provider.Bard | bard.google.com |
| h2ogpt-gm-oasst1-en-2048-falcon-7b-v3 | Hugging Face | g4f.Provider.H2o | www.h2o.ai |
| h2ogpt-gm-oasst1-en-2048-falcon-40b-v1 | Hugging Face | g4f.Provider.H2o | www.h2o.ai |
| h2ogpt-gm-oasst1-en-2048-open-llama-13b | Hugging Face | g4f.Provider.H2o | www.h2o.ai |
| claude-instant-v1 | Anthropic | g4f.Provider.Vercel | sdk.vercel.ai |
| claude-v1 | Anthropic | g4f.Provider.Vercel | sdk.vercel.ai |
| claude-v2 | Anthropic | g4f.Provider.Vercel | sdk.vercel.ai |
| command-light-nightly | Cohere | g4f.Provider.Vercel | sdk.vercel.ai |
| command-nightly | Cohere | g4f.Provider.Vercel | sdk.vercel.ai |
| gpt-neox-20b | Hugging Face | g4f.Provider.Vercel | sdk.vercel.ai |
| oasst-sft-1-pythia-12b | Hugging Face | g4f.Provider.Vercel | sdk.vercel.ai |
| oasst-sft-4-pythia-12b-epoch-3.5 | Hugging Face | g4f.Provider.Vercel | sdk.vercel.ai |
| santacoder | Hugging Face | g4f.Provider.Vercel | sdk.vercel.ai |
| bloom | Hugging Face | g4f.Provider.Vercel | sdk.vercel.ai |
| flan-t5-xxl | Hugging Face | g4f.Provider.Vercel | sdk.vercel.ai |
| code-davinci-002 | OpenAI | g4f.Provider.Vercel | sdk.vercel.ai |
| gpt-3.5-turbo-16k | OpenAI | g4f.Provider.Vercel | sdk.vercel.ai |
| gpt-3.5-turbo-16k-0613 | OpenAI | g4f.Provider.Vercel | sdk.vercel.ai |
| gpt-4-0613 | OpenAI | g4f.Provider.Vercel | sdk.vercel.ai |
| text-ada-001 | OpenAI | g4f.Provider.Vercel | sdk.vercel.ai |
| text-babbage-001 | OpenAI | g4f.Provider.Vercel | sdk.vercel.ai |
| text-curie-001 | OpenAI | g4f.Provider.Vercel | sdk.vercel.ai |
| text-davinci-002 | OpenAI | g4f.Provider.Vercel | sdk.vercel.ai |
| text-davinci-003 | OpenAI | g4f.Provider.Vercel | sdk.vercel.ai |
| llama13b-v2-chat | Replicate | g4f.Provider.Vercel | sdk.vercel.ai |
| llama7b-v2-chat | Replicate | g4f.Provider.Vercel | sdk.vercel.ai |
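
To use one of the listed models, pass its name (or the matching entry from g4f.models) together with its provider. A minimal sketch, assuming the Vercel provider accepts the model names exactly as listed above:

import g4f

response = g4f.ChatCompletion.create(
    model="llama13b-v2-chat",  # one of the models from the table above
    provider=g4f.Provider.Vercel,
    messages=[{"role": "user", "content": "Hello"}],
)
print(response)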

🔗 Related GPT4Free Projects

🎁 Projects:
  • gpt4free
  • gpt4free-ts
  • Free AI API's & Potential Providers List
  • ChatGPT-Clone
  • ChatGpt Discord Bot
  • Nyx-Bot (Discord)
  • LangChain gpt4free
  • ChatGpt Telegram Bot
  • ChatGpt Line Bot
  • Action Translate Readme
  • Langchain Document GPT

🤝 Contribute

Create Provider with AI Tool

Call the create_provider.py script in your terminal:

python etc/tool/create_provider.py
  1. Enter your name for the new provider.
  2. Copy and paste the cURL command from your browser developer tools.
  3. Let the AI create the provider for you.
  4. Customize the provider according to your needs.

Create Provider

  1. Check out the current list of potential providers, or find your own provider source!
  2. Create a new file in g4f/Provider with the name of the Provider
  3. Implement a class that extends BaseProvider.
from __future__ import annotations

from ..typing import AsyncResult, Messages
from .base_provider import AsyncGeneratorProvider

class HogeService(AsyncGeneratorProvider):
    url                   = "https://chat-gpt.com"
    working               = True
    supports_gpt_35_turbo = True

    @classmethod
    async def create_async_generator(
        cls,
        model: str,
        messages: Messages,
        proxy: str = None,
        **kwargs
    ) -> AsyncResult:
        yield ""
  4. Here you can adjust the settings; for example, if the website supports streaming, set supports_stream to True.
  5. Write code to request the provider in create_async_generator and yield the response, even if it is a one-time response. Do not hesitate to look at other providers for inspiration (a fuller sketch is included at the end of this section).
  6. Add the provider name in g4f/Provider/__init__.py
from .HogeService import HogeService

__all__ = [
  "HogeService",
]
  7. You are done! Test the provider by calling it:
import g4f

response = g4f.ChatCompletion.create(
    model='gpt-3.5-turbo',
    provider=g4f.Provider.PROVIDERNAME,
    messages=[{"role": "user", "content": "test"}],
    stream=g4f.Provider.PROVIDERNAME.supports_stream,
)

for message in response:
    print(message, flush=True, end='')
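As a follow-up to step 5 of the provider checklist above, here is a hedged sketch of what the request logic inside create_async_generator could look like with aiohttp. The endpoint path and payload shape are hypothetical placeholders and must be adapted to the real provider's API:

from __future__ import annotations

from aiohttp import ClientSession

from ..typing import AsyncResult, Messages
from .base_provider import AsyncGeneratorProvider

class HogeService(AsyncGeneratorProvider):
    url                   = "https://chat-gpt.com"
    working               = True
    supports_gpt_35_turbo = True

    @classmethod
    async def create_async_generator(
        cls,
        model: str,
        messages: Messages,
        proxy: str = None,
        **kwargs
    ) -> AsyncResult:
        # Hypothetical endpoint and payload: adjust both to the provider's real API.
        payload = {"model": model, "messages": messages}
        async with ClientSession() as session:
            async with session.post(f"{cls.url}/api/chat", json=payload, proxy=proxy) as response:
                response.raise_for_status()
                # Stream the body and yield decoded chunks as they arrive
                async for chunk in response.content.iter_any():
                    if chunk:
                        yield chunk.decode(errors="ignore")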

🙌 Contributors

A list of the contributors is available here
The Vercel.py file contains code from vercel-llm-api by @ading2210, which is licensed under the GNU GPL v3
Top 1 Contributor: @hlohaus

©️ Copyright

This program is licensed under the GNU GPL v3

xtekky/gpt4free: Copyright (C) 2023 xtekky

This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
GNU General Public License for more details.

You should have received a copy of the GNU General Public License
along with this program.  If not, see <https://www.gnu.org/licenses/>.

⭐ Star History

Star History Chart

📄 License


This project is licensed under the GNU GPL v3.0.

(🔼 Back to top)