runpod-workers/worker-vllm

Issue: Update vLLM to version 0.5.0+, and a few suggestions

Opened this issue · 13 comments

Description

  1. 🌟 Upgrade vLLM: We need to rocket vLLM to version 0.5.0 or beyond! 🚀
  2. 🤖 Tensorize Awesomeness: The tensorizer feature is like giving vLLM a turbo boost. 🏎️ Check out the Tensorize vLLM example for a sneak peek.
    • 🚀 It lets us load the model as it downloads (but remember, the model needs a little conversion magic first).
  3. 📦 Pip It Up: Why build vLLM from scratch when we can summon it with a pip package? Efficiency, my friend! 🧙‍♂️ (See the sketch just below this list.)
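A minimal sketch of points 2 and 3 together, assuming vLLM >= 0.5.0 installed from the pip package (`pip install vllm`) rather than built from source. The model name and S3 URI are placeholders, the model must first be serialized (e.g. with vLLM's examples/tensorize_vllm_model.py script), and the exact loader API should be treated as an assumption against recent vLLM releases:

```python
# Minimal sketch, assuming `pip install vllm` (>= 0.5.0) instead of a source
# build. The tensorizer loader streams pre-serialized weights, so the model
# must be converted first (e.g. with examples/tensorize_vllm_model.py).
from vllm import LLM
from vllm.model_executor.model_loader.tensorizer import TensorizerConfig

llm = LLM(
    model="facebook/opt-125m",  # placeholder; point at your serialized model
    load_format="tensorizer",
    model_loader_extra_config=TensorizerConfig(
        tensorizer_uri="s3://my-bucket/opt-125m.tensors",  # placeholder URI
    ),
)
print(llm.generate(["Hello, tensorizer!"])[0].outputs[0].text)
```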

Kudos to the stellar maintainer! 🌟🙌

+1! I really would like to run Phi3VForCausalLM


+1!

+1, Gemma 2 support has been recently rolled out in vLLM!

+1, it would make much more sense to pip install vllm so that, when a new model is released and implemented in vLLM, it is automatically integrated into this worker. @alpayariyak

Are there any plans to upgrade the VLLM version and if so, can you provide a date?

+1, then we could finally run DeepSeek-Coder v2

+1

Llama 3.1 needs 0.5.3 https://github.com/vllm-project/vllm/releases/tag/v0.5.3

Can we upgrade this worker to support this out of the box in RunPod serverless vLLM?
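Until the image is bumped, here's a minimal sketch of how a deployment could fail fast on an outdated vLLM at startup; the guard itself is hypothetical (not part of worker-vllm), and the 0.5.3 floor comes from the release notes linked above:

```python
# Hypothetical startup guard: refuse to serve Llama 3.1 on a vLLM older
# than 0.5.3 (the minimum per the v0.5.3 release notes linked above).
from importlib.metadata import version

MIN_VLLM = (0, 5, 3)

installed = tuple(int(part) for part in version("vllm").split(".")[:3])
if installed < MIN_VLLM:
    raise RuntimeError(
        f"vLLM {'.'.join(map(str, installed))} is too old for Llama 3.1; "
        "need >= 0.5.3"
    )
```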

Also waiting for the update :) let me know if I can help!

Hi all, thank you so much for the suggestions! I've joined a different company, so @pandyamarut will be taking over. It's been a great pleasure serving you all!

I wish you an amazing next work experience ;) welcome aboard @pandyamarut!

Working on it, sorry for the delay. Thanks for maintaining the repo, @alpayariyak!

Do we know anything about an approximate time frame for the update? That would let us plan model updates in our roadmap. Thanks!

Please support the new FP8 quantization; refer to these docs:
vLLM docs
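For reference, a minimal sketch of what FP8 looks like on the vLLM side once the worker is on a version that supports it (>= 0.5.0); the model name is a placeholder and an FP8-capable GPU (e.g. Hopper) is assumed:

```python
# Minimal sketch, assuming vLLM >= 0.5.0 with FP8 support and an
# FP8-capable GPU. The checkpoint name is a placeholder.
from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Meta-Llama-3-8B-Instruct",  # placeholder model
    quantization="fp8",  # on-the-fly FP8 weight quantization per the vLLM docs
)
params = SamplingParams(temperature=0.0, max_tokens=64)
outputs = llm.generate(["Why is FP8 quantization useful?"], params)
print(outputs[0].outputs[0].text)
```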

I've got a whole new menu with a bunch of new options; I guess it's all of the arguments. That's great, thank you for the update, staff and maintainers! Just the option values need to be updated :)