This repository contains a collection of Python scripts that demonstrate how to use the OpenAI API to generate chat completions.
In increasing order of complexity, the scripts are:
- `chat.py`: A simple script that demonstrates how to use the OpenAI API to generate chat completions.
- `chat_stream.py`: Adds `stream=True` to the API call to return a generator that streams the completion as it is being generated.
- `chat_history.py`: Adds a back-and-forth chat interface using `input()` which keeps track of past messages and sends them with each chat completion call.
- `chat_history_stream.py`: The same idea, but with `stream=True` enabled.
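
For example, the history-plus-streaming variant boils down to a loop like the one below. This is a minimal sketch assuming the `openai` Python package (v1+) and an `OPENAI_API_KEY` environment variable, not the repository's exact code:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [{"role": "system", "content": "You are a helpful assistant."}]
while True:
    question = input("\nYour question: ")
    messages.append({"role": "user", "content": question})

    # stream=True yields chunks as the completion is generated
    stream = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=messages,
        stream=True,
    )
    answer = ""
    for chunk in stream:
        if chunk.choices and chunk.choices[0].delta.content:
            delta = chunk.choices[0].delta.content
            answer += delta
            print(delta, end="", flush=True)

    # Keep the reply so the next turn sends the full conversation
    messages.append({"role": "assistant", "content": answer})
```

Appending the assistant's reply to `messages` is what gives the model memory of the conversation on the next turn.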
Plus these scripts to demonstrate additional features:
- `chat_safety.py`: The simple script with exception handling for Azure AI Content Safety filter errors.
- `chat_async.py`: Uses the async clients to make asynchronous calls, including an example of sending off multiple requests at once using `asyncio.gather`.
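
As a rough illustration of the `asyncio.gather` pattern, concurrent requests might look like this (a sketch assuming the `AsyncOpenAI` client and an `OPENAI_API_KEY` environment variable; `ask` is a hypothetical helper, not a function from the repository):

```python
import asyncio

from openai import AsyncOpenAI

client = AsyncOpenAI()  # reads OPENAI_API_KEY from the environment

async def ask(question: str) -> str:
    response = await client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

async def main() -> None:
    # asyncio.gather sends both requests at once and waits for every reply
    answers = await asyncio.gather(
        ask("What is the capital of France?"),
        ask("What is the capital of Japan?"),
    )
    for answer in answers:
        print(answer)

asyncio.run(main())
```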
If you open this up in a Dev Container or GitHub Codespaces, everything will be set up for you. If not, follow these steps:
- Set up a Python virtual environment and activate it.
- Install the required packages:

  ```shell
  pip install -r requirements.txt
  ```
These scripts can be run against an Azure OpenAI account, an OpenAI.com account, or a local Ollama server, depending on the environment variables you set; a sketch of how those variables might select a client follows these steps.
- Copy the `.env.sample` file to a new file called `.env`:

  ```shell
  cp .env.sample .env
  ```
- For Azure OpenAI, create an Azure OpenAI gpt-3.5 or gpt-4 deployment, and customize the `.env` file with your Azure OpenAI endpoint and deployment id:

  ```
  API_HOST=azure
  AZURE_OPENAI_ENDPOINT=https://YOUR-AZURE-OPENAI-SERVICE-NAME.openai.azure.com
  AZURE_OPENAI_DEPLOYMENT=YOUR-AZURE-DEPLOYMENT-NAME
  AZURE_OPENAI_VERSION=2024-03-01-preview
  ```
- For OpenAI.com, customize the `.env` file with your OpenAI API key and desired model name:

  ```
  API_HOST=openai
  OPENAI_KEY=YOUR-OPENAI-API-KEY
  OPENAI_MODEL=gpt-3.5-turbo
  ```
- For Ollama, customize the `.env` file with your Ollama endpoint and model name (any model you've pulled):

  ```
  API_HOST=ollama
  OLLAMA_ENDPOINT=http://localhost:11434/v1
  OLLAMA_MODEL=llama2
  ```
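
For illustration, here is one way these variables might be used to select a client. This is a hedged sketch, not the repository's actual wiring: `AZURE_OPENAI_KEY` is an assumed variable name that does not appear in the settings above (the scripts may use keyless Azure AD authentication instead).

```python
import os

from openai import AzureOpenAI, OpenAI

API_HOST = os.getenv("API_HOST", "openai")

if API_HOST == "azure":
    client = AzureOpenAI(
        azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
        api_version=os.environ["AZURE_OPENAI_VERSION"],
        api_key=os.environ["AZURE_OPENAI_KEY"],  # assumed name, not in .env above
    )
    model = os.environ["AZURE_OPENAI_DEPLOYMENT"]  # pass the deployment name as the model
elif API_HOST == "ollama":
    # Ollama exposes an OpenAI-compatible API; the key is required by the
    # client library but ignored by the local server
    client = OpenAI(base_url=os.environ["OLLAMA_ENDPOINT"], api_key="nokey")
    model = os.environ["OLLAMA_MODEL"]
else:
    client = OpenAI(api_key=os.environ["OPENAI_KEY"])
    model = os.environ["OPENAI_MODEL"]

response = client.chat.completions.create(
    model=model,
    messages=[{"role": "user", "content": "Say hello!"}],
)
print(response.choices[0].message.content)
```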