Possibility to specify custom API endpoint address?
Opened this issue · 4 comments
Could this be implemented?
For example, `localhost:7860/v1` could be one of the custom addresses passed as a command-line argument, for running these tests against local models that expose an OpenAI-like endpoint but, of course, listen on a different address.
Thank you
- Elliott
Hey! Thanks for the note - yep, that is definitely doable; we just need to add it to the repo. This isn't on the maintainers' immediate development plans, but if you want to throw up a PR we can take a look.
I also need this to work.
Releasing it would be greatly appreciated.
https://github.com/vllm-project/vllm/blob/main/examples/openai_chatcompletion_client.py
This vLLM client example might be worth a look to see how they integrated a custom URL with the openai client.
I'm aware of this setting already; unfortunately, things are a little more complex here, since the repo uses langchain's OpenAI wrapper, which complicates matters more than a direct openai client call.
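One way to sidestep the langchain wrapping: both the pre-1.0 `openai` Python client and langchain's OpenAI wrappers read the `OPENAI_API_BASE` environment variable, so a CLI flag that exports it would reach the wrapped client without touching langchain internals. A minimal sketch, assuming a hypothetical `--api-base` flag (the flag name and the `configure_endpoint` helper are illustrative, not the repo's actual interface):

```python
import argparse
import os


def parse_args(argv=None):
    # Hypothetical flag; the real repo would wire this into its existing CLI.
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--api-base",
        default="https://api.openai.com/v1",
        help="Base URL of an OpenAI-compatible endpoint, "
             "e.g. http://localhost:7860/v1 for a local model server",
    )
    return parser.parse_args(argv)


def configure_endpoint(api_base):
    # langchain's OpenAI wrappers pick up OPENAI_API_BASE from the
    # environment, so exporting it here means the downstream
    # langchain code needs no changes.
    os.environ["OPENAI_API_BASE"] = api_base


if __name__ == "__main__":
    args = parse_args(["--api-base", "http://localhost:7860/v1"])
    configure_endpoint(args.api_base)
    print(os.environ["OPENAI_API_BASE"])  # → http://localhost:7860/v1
```

Whether this is cleaner than passing `openai_api_base` explicitly to the langchain constructor is a design call for the PR; the env var just has the smallest footprint.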