> [!IMPORTANT]
> 🚧 Traces with new columns `SessionID` and `Elapsed time` are being collected now and will be available soon!
This repository contains public releases of a real-world trace dataset of LLM serving workloads for the benefit of the research and academic community.
This LLM serving is powered by Microsoft Azure.
There are currently 4 files in Release v1.1:

- `BurstGPT_1.csv` contains all of our trace in the first 2 months, including some failed requests whose `Response tokens` are 0. 1429.7k lines in total.
- `BurstGPT_without_fails_1.csv` contains all of our trace in the first 2 months without failures. 1404.3k lines in total.
- `BurstGPT_2.csv` contains all of our trace in the second 2 months, including some failed requests whose `Response tokens` are 0. 3858.4k lines in total.
- `BurstGPT_without_fails_2.csv` contains all of our trace in the second 2 months without failures. 3784.2k lines in total.

`BurstGPT_1.csv` is also included in `/data` for you to use.
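As a minimal sketch of working with these files, the snippet below filters out failed requests (those with `Response tokens` equal to 0) using only the standard library, which is how `BurstGPT_without_fails_*.csv` differs from the full traces. The inline sample stands in for the real CSV; the column names follow the field list in this README.

```python
import csv
import io

def drop_failures(rows):
    """Keep only successful requests: failed calls are logged
    with a Response tokens count of 0."""
    return [r for r in rows if int(r["Response tokens"]) > 0]

# Tiny synthetic sample standing in for BurstGPT_1.csv.
sample = io.StringIO(
    "Timestamp,Model,Request tokens,Response tokens,Total tokens,Log Type\n"
    "0,ChatGPT,10,20,30,Conversation log\n"
    "5,GPT-4,15,0,15,API log\n"  # a failed request
    "9,ChatGPT,8,12,20,API log\n"
)
rows = list(csv.DictReader(sample))
ok = drop_failures(rows)
print(len(ok))  # 2 successful requests remain
```

To run against the real file, replace the `io.StringIO` sample with `open("BurstGPT_1.csv")`.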
- You may scale the average Requests Per Second (RPS) in the trace according to your evaluation setups.
- You may also model the patterns in the trace as indicated in our paper and scale the parameters in the models.
- Check our simple request generator demo in `example/`. If you have specific needs, we are eager to help you explore and leverage the trace to its fullest potential. Please report any issues or questions by emailing the mailing list.
- We will continue to update the time range of the trace and add the end time of each request.
- We will update the conversation logs, including the session IDs, timestamps, etc., in each conversation, for researchers to optimize conversation services.
- We will open-source the full benchmark suite for LLM inference soon.
If you use the trace in your research, please cite our paper:
@misc{wang2024burstgpt,
title={BurstGPT: A Real-world Workload Dataset to Optimize LLM Serving Systems},
author={Yuxin Wang and Yuhan Chen and Zeyu Li and Xueze Kang and Zhenheng Tang and Xin He and Rui Guo and Xin Wang and Qiang Wang and Amelie Chi Zhou and Xiaowen Chu},
year={2024},
eprint={2401.17644},
archivePrefix={arXiv},
primaryClass={cs.DC}
}
- Duration: 121 consecutive days in 4 consecutive months.
- Dataset size: ~5.29M lines, ~188MB.
- `Timestamp`: request submission time, in seconds from `0:00:00` on the first day.
- `Model`: the called model, either `ChatGPT` (GPT-3.5) or `GPT-4`.
- `Request tokens`: request token length.
- `Response tokens`: response token length.
- `Total tokens`: request token length plus response token length.
- `Log Type`: the way users call the model, in conversation mode or using the API, either `Conversation log` or `API log`.
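The field list above can be captured as a small typed record, which also makes the schema's implicit invariant (`Total tokens` = `Request tokens` + `Response tokens`) explicit. This is a sketch for consumers of the trace, not part of the official release:

```python
from dataclasses import dataclass

@dataclass
class TraceRecord:
    """One BurstGPT trace line, typed per the field list above."""
    timestamp: float       # seconds from 0:00:00 on the first day
    model: str             # "ChatGPT" (GPT-3.5) or "GPT-4"
    request_tokens: int
    response_tokens: int
    total_tokens: int      # request_tokens + response_tokens
    log_type: str          # "Conversation log" or "API log"

def parse_row(row):
    """Convert a csv.DictReader row (all strings) into a TraceRecord."""
    rec = TraceRecord(
        timestamp=float(row["Timestamp"]),
        model=row["Model"],
        request_tokens=int(row["Request tokens"]),
        response_tokens=int(row["Response tokens"]),
        total_tokens=int(row["Total tokens"]),
        log_type=row["Log Type"],
    )
    # Sanity check implied by the schema.
    assert rec.total_tokens == rec.request_tokens + rec.response_tokens
    return rec

rec = parse_row({"Timestamp": "3.0", "Model": "GPT-4",
                 "Request tokens": "100", "Response tokens": "150",
                 "Total tokens": "250", "Log Type": "API log"})
print(rec.model, rec.total_tokens)  # GPT-4 250
```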