Please be compatible with llama.cpp
DDXDB opened this issue · 3 comments
DDXDB commented
When I use the OpenAI Base URL to connect to llama.cpp, I get this:
{"tid":"6312","timestamp":1713860426,"level":"INFO","function":"log_server_request","line":2875,"msg":"request","remote_addr":"127.0.0.1","remote_port":5213,"status":200,"method":"OPTIONS","path":"/v1/moderations","params":{}}
{"tid":"6312","timestamp":1713860426,"level":"INFO","function":"log_server_request","line":2875,"msg":"request","remote_addr":"127.0.0.1","remote_port":5213,"status":404,"method":"POST","path":"/v1/moderations","params":{}}
It seems to be the moderations endpoint. I can confirm that my llama.cpp server is fine; other programs can communicate with it through the OpenAI API.
Cerlancism commented
You can disable the moderator with the CLI option `--no-use-moderator`.
DDXDB commented
You can disable moderator
CLI option
--no-use-moderator
Can this be configured on the web? I prefer to use it on the web.
Cerlancism commented