keldenl/gpt-llama.cpp
A llama.cpp drop-in replacement for OpenAI's GPT endpoints, allowing GPT-powered apps to run off local llama.cpp models instead of OpenAI.
JavaScript · MIT license
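Because the server mimics OpenAI's endpoints, a GPT-powered app can be pointed at it by swapping the base URL. The sketch below is illustrative only: the port (issue #19 suggests 443 is the default) and the convention of passing the local model path where an OpenAI API key would normally go (suggested by issues #53 and #64) are assumptions, not confirmed API details.

```javascript
// Hypothetical sketch: build an OpenAI-style chat completion request
// aimed at a local gpt-llama.cpp server instead of api.openai.com.
// Assumptions: server on localhost:443, and the bearer "key" holds the
// path to the local llama.cpp model file.
function buildChatRequest(modelPath, messages) {
  return {
    url: "http://localhost:443/v1/chat/completions",
    options: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        // The model path rides in the Authorization header (assumption).
        Authorization: `Bearer ${modelPath}`,
      },
      body: JSON.stringify({ messages }),
    },
  };
}

const req = buildChatRequest("../llama.cpp/models/ggml-vicuna-7b.bin", [
  { role: "user", content: "Hello!" },
]);
// With a running server, the request could then be sent with:
// fetch(req.url, req.options).then((r) => r.json()).then(console.log);
```

Keeping the request shape identical to OpenAI's means existing client code only needs its base URL and key changed, which is the whole point of a drop-in replacement.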
Issues

- #62: llama.cpp unresponsive for 20 seconds (opened by JasonS05, 3 comments)
- #65: How to create a single binary (opened by arita37, 0 comments)
- #63: Module not found: Package path ./lite/tiktoken_bg.wasm?module is not exported from package (opened by Jaykef, 1 comment)
- #64: node:events:491 throw er; // Unhandled 'error' event Error: spawn YOUR_KEY=../llama.cpp/main ENOENT (opened by Jaykef, 0 comments)
- #2: Add support for Auto-GPT (opened by keldenl, 69 comments)
- #61: gguf supported? (opened by hiqsociety, 1 comment)
- #60: Change listening ip to public ip? (opened by Dougie777, 1 comment)
- #46: llama.cpp GPU support (opened by alexl83, 1 comment)
- #58: "Internal Server Error" on a remote server (opened by brinrbc, 0 comments)
- #56: Every Other Chat Response (opened by msj121, 1 comment)
- #57: Finding last messages? (opened by msj121, 0 comments)
- #54: Why is a default chat being forced? (opened by msj121, 0 comments)
- #53: Bearer Token vs Model parameter? (opened by msj121, 0 comments)
- #45: Slow speed Vicuna - 7B Help plz (opened by C0deXG, 3 comments)
- #52: Cannot POST /V1/embeddings (opened by Terramoto, 1 comment)
- #51: SERVER BUSY, REQUEST QUEUED (opened by CyberRide, 0 comments)
- #36: run with llama_index (opened by shengkaixuan, 2 comments)
- #44: npm error on gpt-llama.cpp (opened by C0deXG, 4 comments)
- #42: ERR_MODULE_NOT_FOUND (opened by ZERO-A-ONE, 3 comments)
- #32: Unable to run test-installation.sh in ubuntu (opened by BenjiKCF, 5 comments)
- #38: weird headers error in chatcompletion mode (opened by OracleToes, 1 comment)
- #12: Problems on linux (opened by atisharma, 10 comments)
- #19: How to change the port from 443 to 8000? I am trying to run the setup on my Linux server. (opened by satcit-me, 5 comments)
- #40: could we have git tags? (opened by jpetrucciani, 1 comment)
- #34: Duplication of capabilities? (opened by das-sein, 1 comment)
- #23: Add support for AgentGPT (opened by alexl83, 1 comment)
- #33: issue with chatbot-ui (opened by gsgoldma, 0 comments)
- #30: Add support for ChatGPT-Discord-Bot (opened by keldenl, 1 comment)
- #28: trouble generating a response (opened by gsgoldma, 2 comments)
- #24: following instructions, get this error (opened by gsgoldma, 0 comments)
- #22: Add "--mlock" for M1 mac, on routes/chatRoutes.js (opened by m0chael, 1 comment)
- #20: Issue: Why does (windows cmd) env variable setting work for some but not others? (opened by keldenl, 2 comments)
- #18: Cannot GET / (opened by intulint, 3 comments)
- #13: Add support for LlamaAcademy (opened by Senior-S, 0 comments)
- #10: Add support for BabyAGI (opened by keldenl, 0 comments)
- #5: Server is not working on windows yet (opened by InfernalDread, 19 comments)
- #3: Add support for ai-code-translator (opened by keldenl, 2 comments)