mit-han-lab/qserve

Expected speed for llama3-70b-instruct

ethxnp opened this issue · 1 comment

Hello - I quantized Llama3-70B-Instruct with g128 (model here) and ran the benchmarking script on an L40S with the command below:

> export GLOBAL_BATCH_SIZE=4 NUM_GPU_PAGE_BLOCKS=100
> python qserve_benchmark.py --model $MODEL_PATH --benchmarking --precision w4a8kv4 --group-size 128

I get ~60 tokens/s; is this the expected throughput? I was hoping for something closer to Llama2-70B's ~280 tokens/s.

Hi @ethxnp, thank you very much for your interest in QServe!

Yes, the expected throughput should be close to 280 tokens/s. It might be slightly lower, since Llama3 models have a much larger vocabulary (roughly 128K tokens vs. 32K for Llama2), which makes the output projection more expensive.

The reason you are getting ~60 tokens/s is that the batch size is set to 4. To maximize throughput, you need to take full advantage of the device's capacity: on an L40S, the maximum batch size for 70B models should be close to 24.
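
As a minimal sketch of what the adjusted invocation could look like, using the same environment variables and flags as in your command above: the batch size of 24 follows from the capacity estimate in this comment, and the NUM_GPU_PAGE_BLOCKS value is carried over unchanged as a placeholder that you will likely need to retune for the larger batch.

> # Sketch: raise GLOBAL_BATCH_SIZE toward the L40S capacity estimate (~24 for a 70B model).
> # NUM_GPU_PAGE_BLOCKS=100 is carried over from the original command; it may need to be
> # increased so the larger batch still has enough KV-cache blocks within L40S memory.
> export GLOBAL_BATCH_SIZE=24 NUM_GPU_PAGE_BLOCKS=100
> python qserve_benchmark.py --model $MODEL_PATH --benchmarking --precision w4a8kv4 --group-size 128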