vllm

vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs

Primary Language: Python
License: Apache-2.0
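
Below is a minimal sketch of offline inference with vLLM's Python API (`LLM` and `SamplingParams`), illustrating the high-throughput batched generation the description refers to. The model id and sampling values are illustrative choices, not taken from this page; vLLM must be installed (e.g. `pip install vllm`) and a compatible GPU available.

```python
from vllm import LLM, SamplingParams

# Illustrative prompts; vLLM batches them for throughput.
prompts = [
    "The capital of France is",
    "PagedAttention improves memory efficiency by",
]

# Illustrative sampling settings.
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=32)

# Any Hugging Face-compatible model id works here; this one is an example.
llm = LLM(model="facebook/opt-125m")

# Generate completions for all prompts in one batched call.
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    print(output.prompt, "->", output.outputs[0].text)
```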
