Docker compose failing from the beginning
DutchEllie opened this issue · 5 comments
Expected Behavior
I expected the container to build and the app to start after running `docker compose up`.
Current Behavior
Upon starting the container, it immediately crashes.
Here are the logs:
```
[+] Building 0.0s (0/0)
[+] Running 1/0
 ✔ Container lollms-webui-webui-1  Created  0.0s
Attaching to lollms-webui-webui-1
lollms-webui-webui-1  | Welcome! It seems this is your first use of the new lollms app.
lollms-webui-webui-1  | To make it clear where your data are stored, we now give the user the choice where to put its data.
lollms-webui-webui-1  | This allows you to mutualize models which are heavy, between multiple lollms compatible apps.
lollms-webui-webui-1  | You can change this at any tome using the lollms-update_path script or by simply change the content of the global_paths_cfg.yaml file.
lollms-webui-webui-1  | Please provide a folder to store your configurations files, your models and your personal data (database, custom personalities etc).
lollms-webui-webui-1  | Traceback (most recent call last):
lollms-webui-webui-1  |   File "/srv/app.py", line 1157, in <module>
lollms-webui-webui-1  |     lollms_paths = LollmsPaths.find_paths(force_local=True, custom_default_cfg_path="configs/config.yaml")
lollms-webui-webui-1  |   File "/usr/local/lib/python3.10/site-packages/lollms/paths.py", line 112, in find_paths
lollms-webui-webui-1  |     cfg.lollms_personal_path = input(f"Folder path: ({cfg.lollms_personal_path}):")
lollms-webui-webui-1  | EOFError: EOF when reading a line
lollms-webui-webui-1 exited with code 1
```
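The traceback shows the cause: `LollmsPaths.find_paths` calls `input()` to ask for a data folder, but `docker compose up` does not attach an interactive stdin, so the prompt immediately hits EOF. One possible workaround is to give the service a TTY and keep stdin open via a compose override (a sketch; the service name `webui` is inferred from the container name in the log):

```yaml
# docker-compose.override.yml (sketch)
services:
  webui:
    stdin_open: true   # keep STDIN open so input() can read
    tty: true          # allocate a pseudo-TTY for the prompt
```

With that in place you can run `docker compose up` in the foreground and answer the prompt, or use `docker compose run webui`, which should attach an interactive terminal by default.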
Steps to Reproduce
- Clone the repo
- Run `docker compose up`
- It dies
Context
There isn't much context to give: it's Docker, so the environment should be fixed and reproducible.
It looks like lollms-webui expects a global configuration file in `lollms-webui/` called `global_paths_cfg.yaml`. It should contain:
- The path to the `lollms` library
- The path to a configuration cache for the LLMs and the database
```yaml
lollms_path: /home/user/.local/lib/python3.11/site-packages/lollms
lollms_personal_path: /home/user/.cache/lollms
```
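One way to avoid the `EOFError` entirely is to write this file before the app starts, so `find_paths` never has to prompt. A minimal sketch, assuming the app's working directory is writable and using example paths from a stock `python:3.10` image (adjust both paths to your actual image layout):

```shell
# Pre-create global_paths_cfg.yaml so the interactive prompt never runs.
# Both paths below are examples; point them at your real lollms install
# and at the folder where you want models/configs/databases to live.
cat > global_paths_cfg.yaml <<'EOF'
lollms_path: /usr/local/lib/python3.10/site-packages/lollms
lollms_personal_path: /root/.cache/lollms
EOF
```

This file could also be baked into the image or bind-mounted from the host via the compose `volumes` section.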
The `lollms_personal_path` will look like this after initialization:
```
tree ~/.cache/lollms
.
├── configs
│   ├── binding_llamacpp_config.yaml
│   └── local_config.yaml
├── data
├── databases
│   └── database.db
└── models
    ├── binding_template
    ├── c_transformers
    │   ├── mpt-7b-storywriter.ggmlv3.q5_1.bin
    │   └── starcoderplus.ggmlv3.q4_1.bin
    ├── gpt_4all
    │   ├── ggml-gpt4all-j-v1.3-groovy.bin
    │   ├── ggml-gpt4all-l13b-snoozy.bin
    │   ├── ggml-mpt-7b-chat.bin
    │   ├── ggml-mpt-7b-instruct.bin
    │   └── ggml-vicuna-13b-1.1-q4_2.bin
    ├── gpt_j_a
    │   ├── ggml-gpt4all-j-v1.3-groovy.bin
    │   ├── ggml-gpt4all-l13b-snoozy.bin
    │   ├── ggml-mpt-7b-chat.bin
    │   ├── ggml-mpt-7b-instruct.bin
    │   └── ggml-vicuna-13b-1.1-q4_2.bin
    ├── gpt_j_m
    ├── gptq
    ├── hugging_face
    ├── llama_cpp_official
    │   ├── airoboros-13b-gpt4.ggmlv3.q4_0.bin
    │   ├── airoboros-33b-gpt4-1.2.ggmlv3.q2_K.bin
    │   ├── Wizard-Vicuna-13B-Uncensored.ggmlv3.q4_0.bin
    │   └── Wizard-Vicuna-30B-Uncensored.ggmlv3.q4_K_S.bin
    ├── open_ai
    └── py_llama_cpp
        ├── Manticore-13B.ggmlv3.q4_0.bin
        └── Wizard-Vicuna-13B-Uncensored.ggmlv3.q4_0.bin
```
You need to create the `models` directory and each of its subdirectories manually, since they don't seem to be created automatically (`-p` also creates the `models` parent if it is missing):

```shell
mkdir -p models/{py_llama_cpp,c_transformers,llama_cpp_official,binding_template,gpt_j_m,gpt_4all,open_ai,gpt_j_a,gptq,hugging_face}
```
The file `global_paths_cfg.yaml` should probably be mentioned in the README.md file...
I made an attempt to cleanly install this project in a Dockerfile with GPU support, and got it working:
https://gist.github.com/jsjolund/c03089becae815ad6cdd863d1a3f20d4
The Dockerfile is based on nvidia/cuda with Ubuntu and cuDNN. It should be used with the NVIDIA Container Toolkit to enable GPU support in Docker.
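For reference, once the NVIDIA Container Toolkit is installed, GPU access can be requested from Compose roughly like this (a sketch; the `webui` service name is an assumption):

```yaml
services:
  webui:
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all            # or an integer to limit the number of GPUs
              capabilities: [gpu]
```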
I have not tested the CUDA capabilities extensively yet.
Hi, this looks cool.
Consider making a pull request if you want to share your work on the main UI. I basically coded the Docker setup with my eyes closed, as I had no access to a Docker-enabled machine.
Nice job.
I still had this issue
Can you try the last updates?