Adding ollama support fails
PlanetMacro opened this issue · 3 comments
I tried to add an ollama model to /home/user/.config/ailice/config.json as explained in the readme:
(env) user@debian-ai:~/AIlice$ ailice_main --modelID=ollama:llama2:latest --prompt="main"
config.json is located at /home/user/.config/ailice
In order to simplify installation and usage, we have set local execution as the default behavior, which means AI has complete control over the local environment. To prevent irreversible losses due to potential AI errors, you may consider one of the following two methods: the first one, run AIlice in a virtual machine; the second one, install Docker, use the provided Dockerfile to build an image and container, and modify the relevant configurations in config.json. For detailed instructions, please refer to the documentation.
killing proc with PID 27298
killing proc with PID 27299
killing proc with PID 27302
killing proc with PID 27303
killing proc with PID 27305
killing proc with PID 27308
killing proc with PID 27309
storage started.
browser started.
arxiv started.
google started.
duckduckgo started.
scripter started.
computer started.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
We now start the vector database. Note that this may include downloading the model weights, so it may take some time.
Vector database has been started. returned msg: vector database has been switched to a non-persistent version. tokenizer: bert-base-uncased, model: nomic-ai/nomic-embed-text-v1
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
Encountered an exception, AIlice is exiting: 'llama2:latest'
File "/home/user/AIlice/ailice/AIliceMain.py", line 126, in main
mainLoop(**kwargs)
File "/home/user/AIlice/ailice/AIliceMain.py", line 91, in mainLoop
llmPool.Init([modelID])
File "/home/user/AIlice/ailice/core/llm/ALLMPool.py", line 21, in Init
self.pool[id] = MODEL_WRAPPER_MAP[config.models[modelType]["modelWrapper"]](modelType=modelType, modelName=modelName)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/AIlice/ailice/core/llm/AModelChatGPT.py", line 16, in __init__
modelCfg = config.models[modelType]["modelList"][modelName]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^
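The KeyError message ('llama2:latest') suggests the modelID is split on the first colon into a model type and a model name, so everything after "ollama:" is looked up in modelList. A minimal sketch of that behavior (the split logic is an assumption for illustration, not AIlice's actual code):

```python
# Hypothetical reconstruction of the modelID parsing (assumption, not
# AIlice's actual code): split on the first colon only.
model_id = "ollama:llama2:latest"
model_type, model_name = model_id.split(":", 1)
print(model_type)  # ollama
print(model_name)  # llama2:latest

# The config's modelList key is "ollama/llama2:latest", so the lookup
# with "llama2:latest" raises the KeyError seen in the traceback.
model_list = {"ollama/llama2:latest": {"contextWindow": 8192}}
try:
    cfg = model_list[model_name]
except KeyError as e:
    print(e)  # 'llama2:latest'
```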
The config.json looks like this:
{
    "maxMemory": {},
    "quantization": null,
    "models": {
        "hf": {...},
        "peft": {...},
        "oai": {...},
        "groq": {...},
        "mistral": {...},
        "ollama": {
            "modelWrapper": "AModelChatGPT",
            "apikey": "fakekey",
            "baseURL": "http://localhost:4000",
            "modelList": {
                "ollama/llama2:latest": {
                    "contextWindow": 8192,
                    "systemAsUser": false
                }
            }
        },
        "anthropic": {...}
    },
    "temperature": 0.0,
    "flashAttention2": false,
    "speechOn": false,
    "contextWindowRatio": 0.6,
    "services": {...}
}
Considering your configuration, the appropriate modelID should be:
ollama:ollama/llama2:latest
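With that modelID, the original invocation would become (same flags as in the report above):

```shell
ailice_main --modelID=ollama:ollama/llama2:latest --prompt="main"
```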
OK, that makes sense. However, it gives another error:
Encountered an exception, AIlice is exiting: 'formatter'
File "/home/user/AIlice/ailice/AIliceMain.py", line 126, in main
mainLoop(**kwargs)
File "/home/user/AIlice/ailice/AIliceMain.py", line 91, in mainLoop
llmPool.Init([modelID])
File "/home/user/AIlice/ailice/core/llm/ALLMPool.py", line 21, in Init
self.pool[id] = MODEL_WRAPPER_MAP[config.models[modelType]["modelWrapper"]](modelType=modelType, modelName=modelName)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/AIlice/ailice/core/llm/AModelChatGPT.py", line 17, in __init__
self.formatter = CreateFormatter(modelCfg["formatter"], tokenizer = self.tokenizer, systemAsUser = modelCfg['systemAsUser'])
It seems there was a misleading error in my documentation (odd that such a mistake slipped in).
The correct configuration requires adding a "formatter" line:
"ollama/llama2:latest": {
    "formatter": "AFormatterGPT",
    "contextWindow": 8192,
    "systemAsUser": false
}
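With the "formatter" key present, the lookup in AModelChatGPT's __init__ (modelCfg["formatter"]) no longer raises. A minimal sketch of the two lookups against the corrected entry (simplified dict access for illustration, not AIlice's actual code):

```python
import json

# Corrected "ollama" modelList as a JSON fragment (matches the entry above).
config = json.loads("""
{
  "modelList": {
    "ollama/llama2:latest": {
      "formatter": "AFormatterGPT",
      "contextWindow": 8192,
      "systemAsUser": false
    }
  }
}
""")

# Both keys that previously raised KeyError now resolve.
model_cfg = config["modelList"]["ollama/llama2:latest"]
print(model_cfg["formatter"])      # AFormatterGPT
print(model_cfg["contextWindow"])  # 8192
```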