keldenl/gpt-llama.cpp

Error: spawn ..\llama.cpp\main ENOENT at ChildProcess._handle.onexit

lzbeefnoodle opened this issue · 1 comment

Really nice project for running GPT locally, thanks to everyone for the effort!
I tried several models on Windows 11. Two of them, ggml-vic13b-uncensored-q4_0.bin and ggml-vic13b-q4_0.bin, work fine from the command line (something like main -m ... -p ...); the other models still fail with "libc++abi: terminating due to uncaught exception of type std::runtime_error: unexpectedly reached end of file." When I then ran test-application.sh against the running server, I got the errors below:
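For reference, a command-line run that works looks roughly like this (model path and prompt are only placeholders, executed from the llama.cpp directory):

main -m models\ggml-vic13b-q4_0.bin -p "Hello, how are you?" -n 128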

Message from test-application.sh (client terminal):
--GPT-LLAMA.CPP TEST INSTALLATION SCRIPT LAUNCHED--
PLEASE MAKE SURE THAT A LOCAL GPT-LLAMA.CPP SERVER IS STARTED. OPEN A SEPARATE TERMINAL WINDOW AND START IT.
What port is your server running on? (press enter for default 443 port): 8000
Please drag and drop the location of your Llama-based Model (.bin) here and press enter:
../llama.cpp/models/ggml-vic13b-uncensored-q4_0.bin

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   266    0     0  100   266      0   4977 --:--:-- --:--:-- --:--:--  5018
curl: (56) Recv failure: Connection was reset

Error: Curl command failed!
Is the gpt-llama.cpp server running? Try starting the server and running this script again.
Make sure you are testing on the right port. The Curl command server error port should match your port in the gpt-llama.cpp window.
Please check for any errors in the terminal window running the gpt-llama.cpp server.
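As a side note, the endpoint can also be exercised directly with curl instead of the script. The request below is only a sketch: it assumes the 8000 port chosen above and, if I read the gpt-llama.cpp README correctly, the model path being passed as the API key in the Authorization header; run it from the same shell used for test-application.sh:

curl -X POST http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer ../llama.cpp/models/ggml-vic13b-uncensored-q4_0.bin" \
  -d '{"messages": [{"role": "user", "content": "How are you doing today?"}]}'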

Message from server terminal:

REQUEST RECEIVED
PROCESSING NEXT REQUEST FOR /v1/chat/completions
LLAMA.CPP DETECTED

===== CHAT COMPLETION REQUEST =====

AUTO MODEL DETECTION FAILED. LOADING DEFAULT CHATENGINE...
{}

===== LLAMA.CPP SPAWNED =====
..\llama.cpp\main -m ..\llama.cpp\models\ggml-vic13b-uncensored-q4_0.bin --temp 0.7 --n_predict 1000 --top_p 0.1 --top_k 40 -c 2048 --seed -1 --repeat_penalty 1.1764705882352942 --reverse-prompt user: --reverse-prompt
user --reverse-prompt system: --reverse-prompt
system --reverse-prompt

-i -p Complete the following chat conversation between the user and the assistant. system messages should be strictly followed as additional instructions.

system: You are a helpful assistant.
user: How are you?
assistant: Hi, how may I help you today?
system: You are ChatGPT, a helpful assistant developed by OpenAI.
user: How are you doing today?
assistant:

===== REQUEST =====
user: How are you doing today?
node:events:491
throw er; // Unhandled 'error' event
^

Error: spawn ..\llama.cpp\main ENOENT
at ChildProcess._handle.onexit (node:internal/child_process:283:19)
at onErrorNT (node:internal/child_process:476:16)
at process.processTicksAndRejections (node:internal/process/task_queues:82:21)
Emitted 'error' event on ChildProcess instance at:
at ChildProcess._handle.onexit (node:internal/child_process:289:12)
at onErrorNT (node:internal/child_process:476:16)
at process.processTicksAndRejections (node:internal/process/task_queues:82:21) {
errno: -4058,
code: 'ENOENT',
syscall: 'spawn ..\llama.cpp\main',
path: '..\llama.cpp\main',
spawnargs: [
'-m',
'..\llama.cpp\models\ggml-vic13b-uncensored-q4_0.bin',
'--temp',
'0.7',
'--n_predict',
'1000',
'--top_p',
'0.1',
'--top_k',
'40',
'-c',
'2048',
'--seed',
'-1',
'--repeat_penalty',
'1.1764705882352942',
'--reverse-prompt',
'user:',
'--reverse-prompt',
'\nuser',
'--reverse-prompt',
'system:',
'--reverse-prompt',
'\nsystem',
'--reverse-prompt',
'\n\n\n',
'-i',
'-p',
'Complete the following chat conversation between the user and the assistant. system messages should be strictly followed as additional instructions.\n' +
'\n' +
'system: You are a helpful assistant.\n' +
'user: How are you?\n' +
'assistant: Hi, how may I help you today?\n' +
'system: You are ChatGPT, a helpful assistant developed by OpenAI.\n' +
'user: How are you doing today?\n' +
'assistant:'
]
}

Node.js v18.16.0
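The ENOENT here just means that Node's spawn could not find an executable at ..\llama.cpp\main relative to the gpt-llama.cpp working directory (errno -4058 is the Windows/libuv code for ENOENT). A quick way to confirm, from a cmd prompt in the gpt-llama.cpp folder:

dir ..\llama.cpp\main.exe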

The problem is fixed. The cause was the location of main.exe: it was under llama.cpp/build/bin, while gpt-llama.cpp expects it at the llama.cpp root (..\llama.cpp\main), so it has to be moved there.
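Assuming the standard CMake build (which the path above suggests), copying the binary up to the llama.cpp root is enough for gpt-llama.cpp to find it; the exact build output folder may differ depending on the generator. From the gpt-llama.cpp directory:

copy ..\llama.cpp\build\bin\main.exe ..\llama.cpp\main.exe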