Azure OpenAI benchmarking tool

⚠️ Code in this repo is written for testing purposes and should not be used in production

The Azure OpenAI Benchmarking tool is designed to aid customers in benchmarking their provisioned-throughput deployments. Provisioned throughput deployments provide a set amount of model compute, but the exact performance for your application depends on several variables such as prompt size, generation size and call rate.

The benchmarking tool provides a simple way to run test traffic on your deployment and validate the throughput for your traffic workloads. The script outputs key performance statistics, including average and 95th-percentile latencies and utilization of the deployment.

You can use this tool to experiment with total throughput at 100% utilization across different traffic patterns for a Provisioned-Managed deployment type. These tests allow you to better optimize your solution design by adjusting the prompt size, generation size and PTUs deployed.

Setup

Pre-requisites

  1. An Azure OpenAI Service resource with a model deployed using a provisioned deployment type (either Provisioned or Provisioned-Managed). For more information, see the resource deployment guide.
  2. Your resource endpoint and access key. The script assumes the key is stored in the following environment variable: OPENAI_API_KEY. For more information on finding your endpoint and key, see the Azure OpenAI Quickstart.
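
For example, in a bash shell you can export the key before launching the benchmark (replace the placeholder with your own key value):

$ export OPENAI_API_KEY=<your-api-key>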

Building and running

In an existing python environment:

$ pip install -r requirements.txt
$ python -m benchmark.bench load --help

Build a docker container:

$ docker build -t azure-openai-benchmarking .
$ docker run azure-openai-benchmarking load --help
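
When running an actual load test from the container (rather than just --help), the access key also needs to be available inside the container. A minimal sketch that forwards the OPENAI_API_KEY environment variable from the host via docker's -e flag, assuming the same example deployment name and endpoint used in the run samples below:

$ docker run -e OPENAI_API_KEY azure-openai-benchmarking load \
    --deployment gpt-4 \
    --rate 60 \
    https://myaccount.openai.azure.com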

General Guidelines

Consider the following guidelines when creating your benchmark tests:

  1. Ensure call characteristics match your production expectations. The number of calls per minute and total tokens you can process varies depending on the prompt size, generation size and call rate.
  2. Run your test long enough to reach a stable state. Throttling is based on the total compute you have deployed and are utilizing, which includes active calls. As a result, you will see a higher call rate when ramping up on an unloaded deployment, because there are no existing active calls being processed. Once your deployment is fully loaded with utilization near 100%, throttling will increase as new calls can only be processed as earlier ones complete. To ensure an accurate measure, set the duration long enough for the throughput to stabilize, especially when running at or close to 100% utilization.

Usage examples

Common Scenarios:

The table below provides example prompt and generation sizes we have seen with some customers. Actual sizes will vary significantly based on your overall architecture. For example, the amount of data grounding you pull into the prompt as part of a chat session can increase the prompt size significantly.

| Scenario | Prompt Size | Completion Size | Calls per minute | Provisioned throughput units (PTU) required |
|---|---|---|---|---|
| Chat | 1000 | 200 | 45 | 200 |
| Summarization | 7000 | 150 | 7 | 100 |
| Classification | 7000 | 1 | 24 | 300 |

Or see the pre-configured shape-profiles below.
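
As an illustration, a command approximating the Chat scenario above might look like the following sketch. It assumes a provisioned deployment named gpt-4 and uses a placeholder endpoint, so adjust both for your resource:

$ python -m benchmark.bench load \
    --deployment gpt-4 \
    --rate 45 \
    --shape-profile custom \
    --context-tokens 1000 \
    --max-tokens 200 \
    https://myaccount.openai.azure.com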

Run samples

During a run, statistics are periodically output to stdout while logs are output to stderr. Some metrics may not show up immediately due to lack of data.

Run load test at 60 RPM with exponential retry back-off

$ python -m benchmark.bench load \
    --deployment gpt-4 \
    --rate 60 \
    --retry exponential \
    https://myaccount.openai.azure.com

2023-10-19 18:21:06 INFO     using shape profile balanced: context tokens: 500, max tokens: 500
2023-10-19 18:21:06 INFO     warming up prompt cache
2023-10-19 18:21:06 INFO     starting load...
2023-10-19 18:21:06 rpm: 1.0   requests: 1     failures: 0    throttled: 0    ctx tpm: 501.0  gen tpm: 103.0  ttft avg: 0.736  ttft 95th: n/a    tbt avg: 0.088  tbt 95th: n/a    e2e avg: 1.845  e2e 95th: n/a    util avg: 0.0%   util 95th: n/a   
2023-10-19 18:21:07 rpm: 5.0   requests: 5     failures: 0    throttled: 0    ctx tpm: 2505.0 gen tpm: 515.0  ttft avg: 0.937  ttft 95th: 1.321  tbt avg: 0.042  tbt 95th: 0.043  e2e avg: 1.223 e2e 95th: 1.658 util avg: 0.8%   util 95th: 1.6%  
2023-10-19 18:21:08 rpm: 8.0   requests: 8     failures: 0    throttled: 0    ctx tpm: 4008.0 gen tpm: 824.0  ttft avg: 0.913  ttft 95th: 1.304  tbt avg: 0.042  tbt 95th: 0.043  e2e avg: 1.241 e2e 95th: 1.663 util avg: 1.3%   util 95th: 2.6% 
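
Because logs go to stderr and statistics go to stdout, the two streams can be separated with ordinary shell redirection if you want to capture the logs in a file. For example, a variation of the run above (the log file name is only an illustration):

$ python -m benchmark.bench load \
    --deployment gpt-4 \
    --rate 60 \
    --retry exponential \
    https://myaccount.openai.azure.com \
    2> bench.log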

Load test with custom request shape

$ python -m benchmark.bench load \
    --deployment gpt-4 \
    --rate 1 \
    --shape-profile custom \
    --context-tokens 1000 \
    --max-tokens 500 \
    https://myaccount.openai.azure.com

Obtain number of tokens for input context

The tokenize subcommand can be used to count the number of tokens in a given input. It supports both plain text and JSON chat-message input.

$ python -m benchmark.bench tokenize \
    --model gpt-4 \
    "this is my context"
tokens: 4

Alternatively you can send your text via stdin:

$ cat mychatcontext.json | python -m benchmark.bench tokenize \
    --model gpt-4
tokens: 65

Configuration Option Details

Shape profiles

The tool generates synthetic requests from random words, matching the number of context tokens in the requested shape profile. In addition, to avoid any engine optimizations, each prompt is prefixed with a random string to force the engine to run full request processing for every request. This ensures that the results observed while running the tool represent the worst-case scenario for the given traffic shape.

The tool supports four different shape profiles via command line option --shape-profile:

| profile | description | context tokens | max tokens |
|---|---|---|---|
| balanced [default] | Balanced count of context and generation tokens. Should be representative of typical workloads. | 500 | 500 |
| context | Represents workloads with larger context sizes compared to generation. For example, chat assistants. | 2000 | 200 |
| generation | Represents workloads with larger generation and smaller contexts. For example, question answering. | 500 | 1000 |
| custom | Allows specifying custom values for context size (--context-tokens) and max generation tokens (--max-tokens). | | |
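
For example, to exercise the context-heavy profile against the same example deployment and endpoint used above (the rate of 10 RPM is arbitrary and only for illustration):

$ python -m benchmark.bench load \
    --deployment gpt-4 \
    --rate 10 \
    --shape-profile context \
    https://myaccount.openai.azure.com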

Output fields

| field | description | sliding window | example |
|---|---|---|---|
| time | Time offset in seconds since the start of the test. | no | 120 |
| rpm | Successful Requests Per Minute. Note that it may be less than --rate as it counts completed requests. | yes | 12 |
| processing | Total number of requests currently being processed by the endpoint. | no | 100 |
| completed | Total number of completed requests. | no | 100 |
| failures | Total number of failed requests out of total requests. | no | 100 |
| throttled | Total number of throttled requests out of total requests. | no | 100 |
| requests | Deprecated in favor of the completed field (output values of both fields are the same). | no | 1233 |
| ctx_tpm | Number of context Tokens Per Minute. | yes | 1200 |
| gen_tpm | Number of generated Tokens Per Minute. | yes | 156 |
| ttft_avg | Average time in seconds from the beginning of the request until the first token was received. | yes | 0.122 |
| ttft_95th | 95th percentile of time in seconds from the beginning of the request until the first token was received. | yes | 0.130 |
| tbt_avg | Average time in seconds between two consecutive generated tokens. | yes | 0.018 |
| tbt_95th | 95th percentile of time in seconds between two consecutive generated tokens. | yes | 0.021 |
| e2e_avg | Average end-to-end request time. | yes | 1.2 |
| e2e_95th | 95th percentile of end-to-end request time. | yes | 1.5 |
| util_avg | Average deployment utilization percentage as reported by the service. | yes | 89.3% |
| util_95th | 95th percentile of deployment utilization percentage as reported by the service. | yes | 91.2% |

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

Trademarks

This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.