ParisNeo/lollms-webui

Starting with docker-compose fails

antirek opened this issue · 5 comments

Expected Behavior

Everything starts up.

Current Behavior

It does not start.

Steps to Reproduce

  1. git clone repo
  2. docker-compose build
  3. docker-compose up
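
Spelled out, the reproduction is roughly the following (the clone URL and directory name are assumptions based on the repository linked later in this thread):

$ git clone https://github.com/nomic-ai/gpt4all-ui.git
$ cd gpt4all-ui
$ sudo docker-compose build
$ sudo docker-compose up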

Error

$ sudo docker-compose up 
Starting gpt4all-ui_webui_1 ... done
Attaching to gpt4all-ui_webui_1
webui_1  | Traceback (most recent call last):
webui_1  |   File "/srv/app.py", line 809, in <module>
webui_1  | Personality file not found. Please verify that the personality you have selected exists or select another personality. Some updates may lead to change in personality name or category, so check the personality selection in settings to be sure.
webui_1  | Checking discussions database...
webui_1  |     bot = Gpt4AllWebUI(app, socketio, config, personality, config_file_path)
webui_1  |   File "/srv/app.py", line 62, in __init__
webui_1  |     super().__init__(config, personality, config_file_path)
webui_1  |   File "/srv/pyGpt4All/api.py", line 53, in __init__
webui_1  |     self.chatbot_bindings = self.create_chatbot()
webui_1  |   File "/srv/pyGpt4All/api.py", line 73, in create_chatbot
webui_1  |     return self.backend(self.config)
webui_1  |   File "/srv/backends/llama_cpp/__init__.py", line 32, in __init__
webui_1  |     self.model = Model(
webui_1  |   File "/usr/local/lib/python3.10/site-packages/pyllamacpp/model.py", line 73, in __init__
webui_1  |     raise Exception(f"File {model_path} not found!")
webui_1  | Exception: File ./models/llama_cpp/gpt4all-lora-quantized-ggml.bin not found!
gpt4all-ui_webui_1 exited with code 1

@antirek was the docker build working for you?

You need to download the model first, as it doesn't do that on its own; you have to choose how you want to download it.

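For example, a minimal sketch of fetching a model by hand before starting the container (the filename comes from the error above; <model-download-url> is a placeholder, check the readme or the model list for a real link):

$ mkdir -p models/llama_cpp
$ # download any supported ggml model into that folder
$ wget -O models/llama_cpp/gpt4all-lora-quantized-ggml.bin <model-download-url>
$ sudo docker-compose up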

Ok, @andzejsp, thanks a lot!
I couldn't tell that from the readme.md, and the webui's error message doesn't tell me to download the model separately.

@andzejsp
Oh, I see that webui.sh contains code that downloads the GPT model, but the Dockerfile uses its own independent install code. Maybe webui.sh could be used in the Dockerfile?

I don't think webui.sh will work in the Dockerfile, because the Dockerfile loads the assets directly through Python, while webui.sh has many extra checks (for a Python venv and other things). webui.sh also offers to download the default model, and the Dockerfile does not, so you have to download one of the models from https://github.com/nomic-ai/gpt4all-ui#llama_cpp-models yourself, put it in the models/llama_cpp folder, mount that folder in Docker, and you're golden. For example:
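
A minimal sketch of such a mount, assuming the service is named webui (matching the container name in the log) and that the app's working directory is /srv as shown in the traceback; the project's actual docker-compose.yml may already define something similar:

services:
  webui:
    build: .
    volumes:
      # mount the host models folder so the container can find
      # ./models/llama_cpp/gpt4all-lora-quantized-ggml.bin under /srv
      - ./models:/srv/models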