This repository contains a collection of Python scripts that demonstrate how to use the OpenAI API to generate chat completions.
In increasing order of complexity, the scripts are:
- `chat.py`: A simple script that demonstrates how to use the OpenAI API to generate chat completions.
- `chat_stream.py`: Adds `stream=True` to the API call to return a generator that streams the completion as it is being generated.
- `chat_history.py`: Adds a back-and-forth chat interface using `input()`, which keeps track of past messages and sends them with each chat completion call.
- `chat_history_stream.py`: The same idea, but with `stream=True` enabled.
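As a rough illustration of the pattern `chat_history.py` implements, the core idea is to accumulate messages in a list and resend the whole list on every call. This is a minimal sketch, not the repository's actual code; the `ask` helper and variable names are illustrative:

```python
# Keep the full conversation so each API call has the prior context.
messages = [{"role": "system", "content": "You are a helpful assistant."}]

def ask(client, model, user_input):
    """Append the user's message, call the chat completions API with the
    full history, then append the assistant's reply so context carries
    forward to the next turn."""
    messages.append({"role": "user", "content": user_input})
    response = client.chat.completions.create(model=model, messages=messages)
    answer = response.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})
    return answer
```

Each turn therefore grows `messages` by two entries (one `user`, one `assistant`), which is what lets the model "remember" earlier parts of the chat.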
Plus these scripts to demonstrate additional features:
- `chat_safety.py`: The simple script with exception handling for Azure AI Content Safety filter errors.
- `chat_async.py`: Uses the async clients to make asynchronous calls, including an example of sending off multiple requests at once using `asyncio.gather`.
- `chat_langchain.py`: Uses the Langchain SDK to generate chat completions. Learn more from the Langchain docs.
- `chat_llamaindex.py`: Uses the LlamaIndex SDK to generate chat completions. Learn more from the LlamaIndex docs.
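The `asyncio.gather` approach used by `chat_async.py` can be sketched like this. This is a simplified stand-in, not the script itself: `fetch_completion` simulates a request, where the real script would `await` the async client's `chat.completions.create(...)`:

```python
import asyncio

async def fetch_completion(prompt: str) -> str:
    """Stand-in for one chat completion request (simulated latency)."""
    await asyncio.sleep(0.01)
    return f"answer to: {prompt}"

async def main() -> list[str]:
    prompts = ["What is 1+1?", "Name a color.", "Say hi."]
    # gather() schedules all the coroutines concurrently and returns
    # their results in the same order as the inputs.
    return await asyncio.gather(*(fetch_completion(p) for p in prompts))

results = asyncio.run(main())
```

Because the requests run concurrently, total wall time is roughly that of the slowest single request rather than the sum of all of them.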
If you open this up in a Dev Container or GitHub Codespaces, everything will be set up for you. If not, follow these steps:
- Set up a Python virtual environment and activate it.
- Install the required packages:

  ```shell
  python -m pip install -r requirements.txt
  ```
These scripts can be run with an Azure OpenAI account, an OpenAI.com account, a local Ollama server, or GitHub models, depending on the environment variables you set.
- Copy the `.env.sample` file to a new file called `.env`:

  ```shell
  cp .env.sample .env
  ```
- For Azure OpenAI, create an Azure OpenAI gpt-3.5 or gpt-4 deployment (perhaps using this template), and customize the `.env` file with your Azure OpenAI endpoint and deployment id:

  ```shell
  API_HOST=azure
  AZURE_OPENAI_ENDPOINT=https://YOUR-AZURE-OPENAI-SERVICE-NAME.openai.azure.com
  AZURE_OPENAI_DEPLOYMENT=YOUR-AZURE-DEPLOYMENT-NAME
  AZURE_OPENAI_VERSION=2024-03-01-preview
  ```

  If you are not yet logged into the Azure account associated with that deployment, run this command to log in:

  ```shell
  az login
  ```
- For OpenAI.com, customize the `.env` file with your OpenAI API key and desired model name:

  ```shell
  API_HOST=openai
  OPENAI_KEY=YOUR-OPENAI-API-KEY
  OPENAI_MODEL=gpt-3.5-turbo
  ```
- For Ollama, customize the `.env` file with your Ollama endpoint and model name (any model you've pulled):

  ```shell
  API_HOST=ollama
  OLLAMA_ENDPOINT=http://localhost:11434/v1
  OLLAMA_MODEL=llama2
  ```

  If you're running inside the Dev Container, replace `localhost` with `host.docker.internal`.
- For GitHub models, customize the `.env` file with your GitHub model name:

  ```shell
  API_HOST=github
  GITHUB_MODEL=gpt-4o
  ```

  You'll need a `GITHUB_TOKEN` environment variable that stores a GitHub personal access token. If you're running this inside a GitHub Codespace, the token will be automatically available. If not, generate a new personal access token and run this command to set the `GITHUB_TOKEN` environment variable:

  ```shell
  export GITHUB_TOKEN="<your-github-token-goes-here>"
  ```