mattvr/ShellGPT

Add local model

Closed this issue · 1 comment

Please add an option to point ShellGPT at a local OpenAI-compatible API server (`llama-cpp-python[server]`, for example).
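
For context, a minimal sketch of starting such a server with llama-cpp-python; the model path is a placeholder, and port 8000 is that server's default:

```sh
# Install llama-cpp-python with its OpenAI-compatible server extra
pip install 'llama-cpp-python[server]'

# Start the server; replace the model path with your own GGUF file.
# By default it listens on http://localhost:8000.
python3 -m llama_cpp.server --model ./models/your-model.gguf
```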

@kirkog86 does being able to change the API URL resolve this?

I just added support for the OPENAI_CHAT_URL environment variable in 0.3.6.

I'm assuming you could target http://localhost:[your-llama-port] to use it with local models.
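
A hedged usage sketch, assuming the llama-cpp-python server above is listening on port 8000, that OPENAI_CHAT_URL expects the full chat-completions endpoint (the exact path ShellGPT expects may differ), and that the CLI is installed as `gpt`:

```sh
# Point ShellGPT at the local OpenAI-compatible server.
# Port 8000 and the /v1/chat/completions path are assumptions
# based on llama-cpp-python's defaults; adjust for your setup.
export OPENAI_CHAT_URL="http://localhost:8000/v1/chat/completions"

# Then use ShellGPT as usual; requests go to the local model.
gpt "say hello"
```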