HyperMink/inferenceable
Scalable AI inference server for CPU and GPU with Node.js | Uses llama.cpp and parts of the llamafile C/C++ core under the hood.
JavaScript · Apache-2.0