Question: llama.cpp server support
everyfin-in opened this issue · 4 comments
everyfin-in commented
Prerequisites
- I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
- I reviewed the Discussions, and have a new useful question to share that cannot be answered within Discussions.
Background Description
Hi
I am trying to load this model using the llama.cpp HTTP server (llama-server). Is this supported? If so, could you please give me an example of how to do it?
thanks
Possible Answer
No response
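For reference, when llama-server does support a model, it exposes an OpenAI-compatible HTTP API (by default at `http://localhost:8080`), and the request shape looks like the sketch below. This is a generic illustration, not confirmation that this particular model works; the prompt and sampling parameters are placeholders.

```python
# Sketch: building a request for llama-server's OpenAI-compatible
# /v1/chat/completions endpoint. Assumes llama-server's default host
# and port; nothing is actually sent unless urlopen() is called.
import json
import urllib.request


def build_chat_request(prompt: str,
                       host: str = "http://localhost:8080") -> urllib.request.Request:
    """Build a POST request for llama-server's chat completions endpoint."""
    payload = {
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,  # placeholder sampling parameter
    }
    return urllib.request.Request(
        url=f"{host}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


if __name__ == "__main__":
    req = build_chat_request("Hello")
    print(req.full_url)
    # urllib.request.urlopen(req) would send it, if a server were running.
```

Sending such a request against the current build would fail for this model, per the maintainer's reply below, but this is the interface a future server integration would target.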
tc-mb commented
It is not supported yet; I have not modified that code path.
SunYabin0329 commented
Is there any plan to support it?
fatihmtlm commented
I am also trying to run this model using llama-server.exe but couldn't manage to. So do I have to use Transformers? I also couldn't manage to build ollama for Windows (it seemed hard, so I haven't tried much).
tc-mb commented