keldenl/gpt-llama.cpp

Server is not working on Windows yet

InfernalDread opened this issue · 19 comments

Hey!

I am very interested in this project, so I wanted to test it out myself. I followed all the instructions, and the command prompt does bring up a connection on localhost:443 after launching "npm start", but after trying to access the docs with the URL, there is no connection. I know that this is primarily being run on Apple hardware, so take your time with future fixes, just wanted to let you know is all.

can u screenshot what u see? also is anything printed in the terminal? thank u!

interesting.. can you make sure you run npm i again to install dependencies?
also, what do you get if you try to visit localhost:443/v1/models

I will try that now.

Same result for both sadly

that is odd.. do you see any issues in the cmd window?

can you try using your ip address instead of localhost? (e.g. 192.168.0.33:443)

does going to localhost:443 even give you anything? it almost seems like the server isn't running at all..

I can try that. I agree with you though, it's almost like the server isn't running lol

Nothing. No issues in the cmd window either, no errors or odd things showing up at all. Odd indeed.

hmmmmmmm.. maybe try opening a new terminal and trying again? or restarting ur pc? this gotta be the weirdest issue i've seen haha

I guess I can try that too, let's hope for the best! In the meantime, is there any code that you are using that is specific to Apple hardware? Or is everything "universally" supported?

i wouldn't think that the issue you're seeing would be due to windows vs mac. i asked chatgpt and it has a couple good suggestions

  1. Make sure your server is actually running and listening on port 443. You can check this by running `netstat -an | findstr :443` in your command prompt or terminal. This command will list all the active network connections, and you should see a line with 127.0.0.1:443 or 0.0.0.0:443 indicating that your server is listening on port 443.

(i see TCP [::]:443 [::]:0. LISTENING for the server)

  2. Check your firewall settings to make sure that port 443 is open and not blocked. You can try temporarily disabling your firewall to see if that resolves the issue.

Maybe port 443 is blocked. try changing the PORT to something other than 443 in index.js (line 11), maybe try 8000?

`const PORT = 8000;`
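If editing the hard-coded value every time feels fragile, one alternative (a hypothetical helper, not code from gpt-llama.cpp's actual index.js, which just hard-codes the constant) is to read the port from an environment variable with a fallback:

```javascript
// Hypothetical sketch: resolve the listen port from an env var,
// falling back to 8000 when the value is missing or invalid.
// gpt-llama.cpp's real index.js simply hard-codes the PORT constant.
function resolvePort(envValue, fallback = 8000) {
  const n = Number(envValue);
  // Accept only a valid TCP port number; anything else uses the fallback.
  return Number.isInteger(n) && n > 0 && n < 65536 ? n : fallback;
}

const PORT = resolvePort(process.env.PORT);
console.log(`would listen on port ${PORT}`);
```

That way `PORT=3000 npm start` would pick a different port without touching the file, while plain `npm start` keeps the 8000 default.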

Ok, I have turned my PC back on, let's do this!

Lets gooo. Thank you for the assistance! Do I need to do anything with the docs? Or are the next steps specific to other programs that utilize this api?

[screenshot]

lets gooo!!!! what was the solution? was it because of the :443 port or did the restart do the job?

docs are a good place to test out the api (you NEED to "AUTHORIZE" for any of the endpoints to work, just throw in the path to the model in there), but otherwise you can start by trying out chatbot-ui's guide (only one i've written so far lol) https://github.com/keldenl/gpt-llama.cpp/blob/master/docs/chatbot-ui-setup-guide.md

Ya, the restart did the trick. How do I authorize? I tried to look into the steps, but I am not too tech savvy lol

EDIT: I have a Vicuna 13B ggml bin file ready to go

[screenshot]

click on that button. then paste the path to your model bin. should be something like `C:/blahblahblah/llama.cpp/models/ggml-model.bin`
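For anyone scripting against the API instead of using the docs page: assuming the "Authorize" field maps to a standard Bearer header carrying the model path (an assumption based on the thread, not verified against gpt-llama.cpp's source), a client could build its request headers like this hypothetical helper:

```javascript
// Hypothetical helper: package the model path as a Bearer token,
// matching what the docs page's "Authorize" field appears to send.
// The path below is a placeholder, not a real model location.
function buildAuthHeaders(modelPath) {
  return { Authorization: `Bearer ${modelPath}` };
}

const headers = buildAuthHeaders("C:/path/to/llama.cpp/models/ggml-model.bin");
console.log(headers.Authorization);
```

These headers would then be attached to requests against endpoints like `localhost:8000/v1/models`.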

Ohhhhh, man, please excuse my stupidity LOL

u are good haha

i'm going to go ahead and close this as resolved! tl;dr if anybody else in the future hits this: it doesn't seem to be Windows-specific, and a computer restart or cmd window restart should do the trick! thanks!