vLLM multi-modal deployment support
ashwinnair14 commented
Colab starting point: https://colab.research.google.com/drive/1VvNWfLeGOn3np87PGFO2DVAsxu-ipzZc?authuser=1#scrollTo=Ydks7-cR0lbv
Upgrade vLLM to 0.5.6 or above (test with Qwen2-VL).
Add the newer parameters to the base class: `limit_mm_per_prompt` on engine construction, and pass images to `llm.generate` via `"multi_modal_data": {"image": images}` in the prompt dict.