vLLM multi-modal deployment support.

Reference implementation in vLLM (Idefics2 vision model): https://github.com/vllm-project/vllm/blob/0fbc6696c28f41009d8493c57e74f5971d6f5026/vllm/model_executor/models/idefics2_vision_model.py#L105

Colab starting point: https://colab.research.google.com/drive/1VvNWfLeGOn3np87PGFO2DVAsxu-ipzZc?authuser=1#scrollTo=Ydks7-cR0lbv

Upgrade vLLM to 0.5.6 or above (test with Qwen2-VL).
Add the newer parameters to the base class: `limit_mm_per_prompt`, and pass images to `llm.generate` via `"multi_modal_data": {"image": images}` (see the sketch below).