Is it still intended and possible to use a custom endpoint in 0.2.0?
Kaschi14 opened this issue · 9 comments
Same question. I could use my own custom endpoint in the previous version, but it no longer works in the newest 0.2.0 version with the same configuration.
This issue is stale because it has been open for 30 days with no activity.
I have the same question
This issue is stale because it has been open for 30 days with no activity.
Sorry for the late response. What do you mean by custom endpoint? Could you share your configuration? llm-vscode supports multiple backends, namely Hugging Face, TGI, OpenAI, ollama & llama.cpp.
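For a locally hosted model, a minimal settings.json sketch could look like the following. The `llm.backend`, `llm.url`, and `llm.modelId` keys and the model name here are assumptions based on the extension's README and may differ between releases, so double-check against your installed version:

```json
{
  // Sketch only: setting names are assumed and may differ between releases.
  "llm.backend": "tgi",
  // Depending on the release, this may need to be the base address or the
  // full route exposed by your server.
  "llm.url": "http://192.168.1.73:8192",
  // Hypothetical model name, replace with your own.
  "llm.modelId": "my-local-model"
}
```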
@McPatate In the previous version of llm-vscode, I could use my own locally deployed model. For example, I deployed my model at http://192.168.1.73:8192/generate and could put that address in the ModelID or Endpoint setting to obtain responses. But in the newest version, 0.2.0, it no longer works. Many thanks.
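In other words, the pre-0.2.0 configuration was roughly the following sketch; the exact setting key is assumed here from the "ModelID or Endpoint" field and may not match the old version exactly:

```json
{
  // Pre-0.2.0 sketch: setting key assumed, adjust to your version.
  "llm.modelIdOrEndpoint": "http://192.168.1.73:8192/generate"
}
```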
Could you share your configuration settings, any logs, or anything else that could help us understand what is going on?
Have you tried updating to the latest version of llm-vscode?
This issue is stale because it has been open for 30 days with no activity.
I'll close this issue; feel free to re-open if you are still facing an issue.