Cannot set up local models
clement-igonet opened this issue · 1 comment
clement-igonet commented
I run a local Rift server with this command:
python -m rift.server.core --host 0.0.0.0 --port 7797 --debug True
Here is the start of the logs:
None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
[ASCII art banner: RIFT, by morph]
[17:32:08] INFO starting Rift server on 0.0.0.0:7797 core.py:171
DEBUG Using selector: EpollSelector selector_events.py:54
           INFO     <Server sockets=(<asyncio.TransportSocket fd=6, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=6, laddr=('0.0.0.0', 7797)>,)> is serving    base_events.py:1546
Then I get this error message:
ERROR    Trying to create an OpenAIClient without an OpenAI key set in Rift settings or set as the OPENAI_API_KEY environment variable.    create.py:100
However, my settings.json file contains this:
"rift.autostart": false,
"rift.chatModel": "llama:llama:llama2-7b @ /models/llama-2-7b.Q5_K_S.gguf",
"rift.codeEditModel": "llama:codellama-7b-instruct @ /models/codellama-7b-instruct.Q2_K.gguf"
These point to models I downloaded from https://huggingface.co/:
- https://huggingface.co/TheBloke/Llama-2-7B-GGUF
- https://huggingface.co/TheBloke/CodeLlama-7B-Instruct-GGUF
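For what it's worth, here is a quick Rift-independent sketch (plain Python; the paths are copied from my settings above) that I can use to check the files actually exist at those locations:

```python
import os

# Paths copied verbatim from the rift.chatModel / rift.codeEditModel settings above.
model_paths = [
    "/models/llama-2-7b.Q5_K_S.gguf",
    "/models/codellama-7b-instruct.Q2_K.gguf",
]

for path in model_paths:
    status = "exists" if os.path.isfile(path) else "MISSING"
    print(f"{path}: {status}")
```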
It seems my settings.json setup is not being taken into account.
Any idea what else to check, or where I'm going wrong?
Is there a way to force the server (via command-line options) to load my models?
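In case it helps with debugging, this is the standalone check I would run to confirm the GGUF file itself loads outside of Rift. It assumes llama-cpp-python is installed and is the backend used for llama models (that is my assumption, not something I have confirmed in the Rift code):

```python
# Standalone sanity check, independent of Rift: try loading the GGUF with
# llama-cpp-python (pip install llama-cpp-python). The model path is the one
# referenced in my settings above; llama-cpp-python as the backend is an assumption.
from llama_cpp import Llama

llm = Llama(model_path="/models/llama-2-7b.Q5_K_S.gguf", n_ctx=512)
result = llm("### Human: Say hello.\n### Assistant:", max_tokens=16)
print(result["choices"][0]["text"])
```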
Harrolee commented
+1