vLLM Compatibility
sidjha1 opened this issue · 1 comment
sidjha1 commented
Hello, I was curious whether it's possible to run models locally via vLLM. The README mentions HF TGI for running local models. Looking through the experimental dspy branch, it seems that HF TGI is chosen as the model backend whenever an OpenAI model is not provided. Should I modify the experimental branch to add vLLM support, or is there another way to run local models on vLLM?
KarelDO commented
Ideally DSPy handles all of this, and IReRa just uses whatever LLM you supply. To run with vLLM for now, it is indeed best to change how the models are created in the irera branch of DSPy. I'd need to think about a more scalable way of handling model providers long-term.
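For reference, a minimal sketch of what that swap could look like, assuming a DSPy version that exposes the `HFClientVLLM` client and a vLLM server already running locally. The model name, port, and configuration call below are illustrative, not the actual irera branch code:

```python
# Minimal sketch: point DSPy at a locally running vLLM server instead of HF TGI.
# Assumes a vLLM server is already serving the model (see the vLLM docs for the
# exact launch command); model name and port here are placeholders.
import dspy

# HFClientVLLM talks to the vLLM HTTP endpoint; swap this in wherever the
# irera branch currently constructs an HFClientTGI / OpenAI client.
local_lm = dspy.HFClientVLLM(
    model="meta-llama/Llama-2-7b-chat-hf",
    port=8000,
    url="http://localhost",
)

# Make the vLLM-backed LM the default for all DSPy modules (and thus IReRa).
dspy.settings.configure(lm=local_lm)
```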