ParisNeo/lollms-webui

DockerCompose runtime Python Exception: ModuleNotFoundError: No module named 'pyGpt4All.backends'

MarkusGnigler opened this issue · 14 comments

Expected Behavior

Docker Compose should start seamlessly.

Current Behavior

Container start throws a Python exception:

Attaching to gpt4all-ui_webui_1
webui_1  | Traceback (most recent call last):
webui_1  |   File "/srv/app.py", line 40, in <module>
webui_1  |     from pyGpt4All.api import GPT4AllAPI
webui_1  |   File "/srv/pyGpt4All/api.py", line 15, in <module>
webui_1  |     from pyGpt4All.backends import BACKENDS_LIST
webui_1  | ModuleNotFoundError: No module named 'pyGpt4All.backends'
gpt4all-ui_webui_1 exited with code 1

I think it is related to issue #58, or at least shows the same behavior.

Thanks for notifying me.
Try pulling the changes and testing again.

Thanks for the quick reply!

Unfortunately it is not fixed.
Are my models folders correct?

[screenshot of the models folder]

It seems to build only when copying the whole pyGpt4All folder:

COPY ./pyGpt4All/ /srv/pyGpt4All
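
A quick way to confirm whether the package is importable inside the built image (a hedged check, assuming the service is named webui as in the compose logs):

docker-compose run --rm webui python -c "from pyGpt4All.backends import BACKENDS_LIST; print('ok')"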

I see a little problem here. In the latest version we support multiple backends, so we added subfolders to put the right model with the right backend. For llama.cpp models use the llama_cpp folder, and for GPT-J models use gpt_j. Please copy your model to the llama_cpp subfolder.
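
In practice that means something like the following (a hedged example; the model file name is the one mentioned later in this thread):

mkdir -p models/llama_cpp models/gpt_j
mv gpt4all-lora-quantized-ggml.bin models/llama_cpp/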

The exception still occurs even with the right environment; my only workaround is to copy the entire pyGpt4All directory.

Any reason why the backends are copied to a different path than the other pyGpt4All files during the COPY steps?

 => [ 7/15] COPY ./pyGpt4All/api.py /srv/pyGpt4All/api.py 0.2s
=> [ 8/15] COPY ./pyGpt4All/db.py /srv/pyGpt4All/db.py 0.2s
=> [ 9/15] COPY ./pyGpt4All/config.py /srv/pyGpt4All/config.py 0.2s
=> [10/15] COPY ./pyGpt4All/extension.py /srv/pyGpt4All/extension.py 0.2s
=> [11/15] COPY ./pyGpt4All/backends/backend.py /srv/backends/backend.py 0.2s
=> [12/15] COPY ./pyGpt4All/backends/llama_cpp.py /srv/backends/llama_cpp.py 0.2s
=> [13/15] COPY ./pyGpt4All/backends/gpt_j.py /srv/backends/gpt_j.py 0.2s

[screenshot of the Docker build output]
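
Indeed, steps 11-13 drop the pyGpt4All/ prefix from the destination, so the files land in /srv/backends instead of /srv/pyGpt4All/backends, which is exactly why the import fails. A sketch of what those COPY lines would presumably need to look like for pyGpt4All.backends to resolve (assuming the folder also ships an __init__.py):

COPY ./pyGpt4All/backends/backend.py /srv/pyGpt4All/backends/backend.py
COPY ./pyGpt4All/backends/llama_cpp.py /srv/pyGpt4All/backends/llama_cpp.py
COPY ./pyGpt4All/backends/gpt_j.py /srv/pyGpt4All/backends/gpt_j.py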

You are right, thanks for pointing out this error.
Please test the new version.

Fixed, Docker starts. Thanks for the prompt help!

Sorry for commenting on this one, but the issue still seems to persist.
I just downloaded the latest version of this repo.
The files have been placed within models/llama_cpp:

ls models/llama_cpp/
gpt4all-lora-quantized-ggml.bin  gpt4all-lora-unfiltered-quantized.new.bin

When performing docker-compose up I still receive:

Starting gpt4all-webui_webui_1 ... done
Attaching to gpt4all-webui_webui_1
webui_1  | Traceback (most recent call last):
webui_1  |   File "/srv/app.py", line 40, in <module>
webui_1  |     from pyGpt4All.api import GPT4AllAPI
webui_1  |   File "/srv/pyGpt4All/api.py", line 15, in <module>
webui_1  |     from pyGpt4All.backends import BACKENDS_LIST
webui_1  | ModuleNotFoundError: No module named 'pyGpt4All.backends'
gpt4all-webui_webui_1 exited with code 1

Am I missing anything else?
Thx

You can try renaming the yaml file and checking it out again:

$ mv docker-compose.yml docker-compose.bk
then
$ git checkout docker-compose.yml
$ docker compose -f docker-compose.yml up
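
If the checkout was updated but the container keeps failing the same way, the image itself may still be the old one; docker-compose only builds it once unless asked to rebuild (standard docker-compose behavior, not specific to this project):

docker-compose up --build
# or, to force a full rebuild:
docker-compose build --no-cache && docker-compose up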

Sorry, but you don't seem to be using the latest code. BACKENDS_LIST has been deprecated for more than a week now.

Please pull the entire project; many things have changed.

Just tried that, and the result is the same.

git clone https://github.com/ParisNeo/Gpt4All-webui.git
Cloning into 'Gpt4All-webui'...
remote: Enumerating objects: 797, done.
remote: Counting objects: 100% (385/385), done.
remote: Compressing objects: 100% (218/218), done.
remote: Total 797 (delta 221), reused 286 (delta 144), pack-reused 412
Receiving objects: 100% (797/797), 1.30 MiB | 8.23 MiB/s, done.
Resolving deltas: 100% (423/423), done.
Updating files: 100% (99/99), done.

cd Gpt4All-webui/

#just in case
git pull
Already up to date.

#moved the models into the llama folder
mv ../gpt4all-lora-* models/llama_cpp/

#tried it again
docker-compose -f docker-compose.yml  up
Starting gpt4all-webui_webui_1 ... done
Attaching to gpt4all-webui_webui_1
webui_1  | Traceback (most recent call last):
webui_1  |   File "/srv/app.py", line 40, in <module>
webui_1  |     from pyGpt4All.api import GPT4AllAPI
webui_1  |   File "/srv/pyGpt4All/api.py", line 15, in <module>
webui_1  |     from pyGpt4All.backends import BACKENDS_LIST
webui_1  | ModuleNotFoundError: No module named 'pyGpt4All.backends'
gpt4all-webui_webui_1 exited with code 1

I see the problem:
You are using:
git clone https://github.com/ParisNeo/Gpt4All-webui.git
but you must use:
https://github.com/nomic-ai/gpt4all-ui.git

Here is the api.py file; it no longer imports the backends this way:
[screenshot of the current api.py]

I cloned the new repo, but now it exits with code 132, with not much verbosity as to what it means.

Based on my "Google research", could this be related to my CPU?
https://codesti.com/issue/nomic-ai/gpt4all-ui/69
RasaHQ/rasa#4602 (comment)

git clone https://github.com/nomic-ai/gpt4all-ui.git
Cloning into 'gpt4all-ui'...
remote: Enumerating objects: 1767, done.
remote: Counting objects: 100% (761/761), done.
remote: Compressing objects: 100% (278/278), done.
remote: Total 1767 (delta 502), reused 566 (delta 451), pack-reused 1006
Receiving objects: 100% (1767/1767), 2.04 MiB | 11.50 MiB/s, done.
Resolving deltas: 100% (1105/1105), done.
Updating files: 100% (97/97), done.

mv gpt4all-lora-* gpt4all-ui/models/llama_cpp

 docker-compose -f docker-compose.yml  up
Creating network "gpt4all-ui_default" with the default driver
Building webui
[+] Building 188.5s (15/15) FINISHED
 => [internal] load build definition from Dockerfile                                                                                 0.1s
 => => transferring dockerfile: 533B                                                                                                 0.0s
 => [internal] load .dockerignore                                                                                                    0.1s
 => => transferring context: 2B                                                                                                      0.0s
 => [internal] load metadata for docker.io/library/python:3.10                                                                       7.3s
 => [ 1/10] FROM docker.io/library/python:3.10@sha256:f12d5573aa14fafc4b86ac76726fabdd1216e03f2dbd82e10406f302677a3754               0.0s
 => [internal] load build context                                                                                                    0.2s
 => => transferring context: 20.68MB                                                                                                 0.2s
 => CACHED [ 2/10] WORKDIR /srv                                                                                                      0.0s
 => [ 3/10] COPY ./requirements.txt .                                                                                                0.2s
 => [ 4/10] RUN python3 -m venv venv && . venv/bin/activate                                                                          2.7s
 => [ 5/10] RUN python3 -m pip install --no-cache-dir -r requirements.txt --upgrade pip                                            155.9s
 => [ 6/10] COPY ./app.py /srv/app.py                                                                                                0.1s
 => [ 7/10] COPY ./pyGpt4All /srv/pyGpt4All                                                                                          0.1s
 => [ 8/10] COPY ./backends /srv/backends                                                                                            0.1s
 => [ 9/10] COPY ./static /srv/static                                                                                                0.1s
 => [10/10] COPY ./templates /srv/templates                                                                                          0.1s
 => exporting to image                                                                                                              21.7s
 => => exporting layers                                                                                                             21.7s
 => => writing image sha256:c98f72d3767f8a0ba35b585ce3682fe231b2100554e0c62050def11c563f2bcc                                         0.0s
 => => naming to docker.io/library/gpt4all-ui_webui                                                                                  0.0s
WARNING: Image for service webui was built because it did not already exist. To rebuild this image you must use `docker-compose build` or `docker-compose up --build`.
Creating gpt4all-ui_webui_1 ... done
Attaching to gpt4all-ui_webui_1
gpt4all-ui_webui_1 exited with code 132

gpt4all-ui$ docker-compose build

Building webui
[+] Building 0.4s (15/15) FINISHED
 => [internal] load build definition from Dockerfile                                                                                 0.1s
 => => transferring dockerfile: 533B                                                                                                 0.0s
 => [internal] load .dockerignore                                                                                                    0.1s
 => => transferring context: 2B                                                                                                      0.0s
 => [internal] load metadata for docker.io/library/python:3.10                                                                       0.2s
 => [ 1/10] FROM docker.io/library/python:3.10@sha256:f12d5573aa14fafc4b86ac76726fabdd1216e03f2dbd82e10406f302677a3754               0.0s
 => [internal] load build context                                                                                                    0.1s
 => => transferring context: 2.55kB                                                                                                  0.1s
 => CACHED [ 2/10] WORKDIR /srv                                                                                                      0.0s
 => CACHED [ 3/10] COPY ./requirements.txt .                                                                                         0.0s
 => CACHED [ 4/10] RUN python3 -m venv venv && . venv/bin/activate                                                                   0.0s
 => CACHED [ 5/10] RUN python3 -m pip install --no-cache-dir -r requirements.txt --upgrade pip                                       0.0s
 => CACHED [ 6/10] COPY ./app.py /srv/app.py                                                                                         0.0s
 => CACHED [ 7/10] COPY ./pyGpt4All /srv/pyGpt4All                                                                                   0.0s
 => CACHED [ 8/10] COPY ./backends /srv/backends                                                                                     0.0s
 => CACHED [ 9/10] COPY ./static /srv/static                                                                                         0.0s
 => CACHED [10/10] COPY ./templates /srv/templates                                                                                   0.0s
 => exporting to image                                                                                                               0.0s
 => => exporting layers                                                                                                              0.0s
 => => writing image sha256:c98f72d3767f8a0ba35b585ce3682fe231b2100554e0c62050def11c563f2bcc                                         0.0s
 => => naming to docker.io/library/gpt4all-ui_webui                                                                                  0.0s

gpt4all-ui$ docker-compose -f docker-compose.yml  up

Starting gpt4all-ui_webui_1 ... done
Attaching to gpt4all-ui_webui_1
gpt4all-ui_webui_1 exited with code 132
docker-compose --verbose up
compose.config.config.find: Using configuration files: ./docker-compose.yml
compose.cli.docker_client.get_client: docker-compose version 1.29.2, build 5becea4c
docker-py version: 5.0.0
CPython version: 3.7.10
OpenSSL version: OpenSSL 1.1.0l  10 Sep 2019
compose.cli.docker_client.get_client: Docker base_url: http+docker://localhost
compose.cli.docker_client.get_client: Docker version: Platform={'Name': 'Docker Engine - Community'}, Components=[{'Name': 'Engine', 'Version': '23.0.1', 'Details': {'ApiVersion': '1.42', 'Arch': 'amd64', 'BuildTime': '2023-02-09T19:46:56.000000000+00:00', 'Experimental': 'false', 'GitCommit': 'bc3805a', 'GoVersion': 'go1.19.5', 'KernelVersion': '5.4.0-144-generic', 'MinAPIVersion': '1.12', 'Os': 'linux'}}, {'Name': 'containerd', 'Version': '1.6.18', 'Details': {'GitCommit': '2456e983eb9e37e47538f59ea18f2043c9a73640'}}, {'Name': 'runc', 'Version': '1.1.4', 'Details': {'GitCommit': 'v1.1.4-0-g5fd4c4d'}}, {'Name': 'docker-init', 'Version': '0.19.0', 'Details': {'GitCommit': 'de40ad0'}}], Version=23.0.1, ApiVersion=1.42, MinAPIVersion=1.12, GitCommit=bc3805a, GoVersion=go1.19.5, Os=linux, Arch=amd64, KernelVersion=5.4.0-144-generic, BuildTime=2023-02-09T19:46:56.000000000+00:00
compose.cli.verbose_proxy.proxy_callable: docker inspect_network <- ('gpt4allui_default')
compose.cli.verbose_proxy.proxy_callable: docker info <- ()
compose.cli.verbose_proxy.proxy_callable: docker info -> {'Architecture': 'x86_64',
 'BridgeNfIp6tables': True,
 'BridgeNfIptables': True,
 'CPUSet': True,
 'CPUShares': True,
 'CgroupDriver': 'cgroupfs',
 'CgroupVersion': '1',
 'ContainerdCommit': {'Expected': '2456e983eb9e37e47538f59ea18f2043c9a73640',
                      'ID': '2456e983eb9e37e47538f59ea18f2043c9a73640'},
 'Containers': 17,
...
compose.cli.verbose_proxy.proxy_callable: docker inspect_network <- ('gpt4all-ui_default')
compose.cli.verbose_proxy.proxy_callable: docker inspect_network -> {'Attachable': True,
 'ConfigFrom': {'Network': ''},
 'ConfigOnly': False,
 'Containers': {},
 'Created': '2023-05-01T18:57:24.943739952-04:00',
 'Driver': 'bridge',
 'EnableIPv6': False,
 'IPAM': {'Config': [{'Gateway': '192.168.80.1', 'Subnet': '192.168.80.0/20'}],
          'Driver': 'default',
          'Options': None},
...
compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=False, filters={'label': ['com.docker.compose.project=gpt4all-ui', 'com.docker.compose.oneoff=False']})
compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items)
compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=False, filters={'label': ['com.docker.compose.project=gpt4allui', 'com.docker.compose.oneoff=False']})
compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items)
compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=gpt4all-ui', 'com.docker.compose.oneoff=False']})
compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 1 items)
compose.cli.verbose_proxy.proxy_callable: docker inspect_container <- ('fb22c7758f5ee59cc5560b77f918750e30f2306848aaf289ab1dd3e19546cc25')
compose.cli.verbose_proxy.proxy_callable: docker inspect_container -> {'AppArmorProfile': 'docker-default',
 'Args': ['app.py',
          '--host',
          '0.0.0.0',
          '--port',
          '9600',
          '--db_path',
          'data/database.db'],
 'Config': {'AttachStderr': False,
            'AttachStdin': False,
...
compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=gpt4all-ui', 'com.docker.compose.service=webui', 'com.docker.compose.oneoff=False']})
compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 1 items)
compose.cli.verbose_proxy.proxy_callable: docker inspect_container <- ('fb22c7758f5ee59cc5560b77f918750e30f2306848aaf289ab1dd3e19546cc25')
compose.cli.verbose_proxy.proxy_callable: docker inspect_container -> {'AppArmorProfile': 'docker-default',
 'Args': ['app.py',
          '--host',
          '0.0.0.0',
          '--port',
          '9600',
          '--db_path',
          'data/database.db'],
 'Config': {'AttachStderr': False,
            'AttachStdin': False,
...
compose.cli.verbose_proxy.proxy_callable: docker inspect_image <- ('gpt4all-ui_webui')
compose.cli.verbose_proxy.proxy_callable: docker inspect_image -> {'Architecture': 'amd64',
 'Author': '',
 'Comment': 'buildkit.dockerfile.v0',
 'Config': {'ArgsEscaped': True,
            'AttachStderr': False,
            'AttachStdin': False,
            'AttachStdout': False,
            'Cmd': ['python',
                    'app.py',
                    '--host',
...
compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=gpt4all-ui', 'com.docker.compose.service=webui', 'com.docker.compose.oneoff=False']})
compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 1 items)
compose.cli.verbose_proxy.proxy_callable: docker inspect_image <- ('gpt4all-ui_webui')
compose.cli.verbose_proxy.proxy_callable: docker inspect_image -> {'Architecture': 'amd64',
 'Author': '',
 'Comment': 'buildkit.dockerfile.v0',
 'Config': {'ArgsEscaped': True,
            'AttachStderr': False,
            'AttachStdin': False,
            'AttachStdout': False,
            'Cmd': ['python',
                    'app.py',
                    '--host',
...
compose.cli.verbose_proxy.proxy_callable: docker inspect_container <- ('fb22c7758f5ee59cc5560b77f918750e30f2306848aaf289ab1dd3e19546cc25')
compose.cli.verbose_proxy.proxy_callable: docker inspect_container -> {'AppArmorProfile': 'docker-default',
 'Args': ['app.py',
          '--host',
          '0.0.0.0',
          '--port',
          '9600',
          '--db_path',
          'data/database.db'],
 'Config': {'AttachStderr': False,
            'AttachStdin': False,
...
compose.parallel.feed_queue: Pending: {<Service: webui>}
compose.parallel.feed_queue: Starting producer thread for <Service: webui>
Starting gpt4all-ui_webui_1 ...
compose.parallel.feed_queue: Pending: {<Container: gpt4all-ui_webui_1 (fb22c7)>}
compose.parallel.feed_queue: Starting producer thread for <Container: gpt4all-ui_webui_1 (fb22c7)>
compose.cli.verbose_proxy.proxy_callable: docker attach <- ('fb22c7758f5ee59cc5560b77f918750e30f2306848aaf289ab1dd3e19546cc25', stdout=True, stderr=True, stream=True)
compose.cli.verbose_proxy.proxy_callable: docker attach -> <docker.types.daemon.CancellableStream object at 0x7fe560d663d0>
compose.cli.verbose_proxy.proxy_callable: docker start <- ('fb22c7758f5ee59cc5560b77f918750e30f2306848aaf289ab1dd3e19546cc25')
compose.parallel.feed_queue: Pending: set()
compose.parallel.feed_queue: Pending: set()
compose.parallel.feed_queue: Pending: set()
compose.parallel.feed_queue: Pending: set()
compose.cli.verbose_proxy.proxy_callable: docker start -> None
Starting gpt4all-ui_webui_1 ... done
compose.parallel.feed_queue: Pending: set()
compose.parallel.parallel_execute_iter: Finished processing: <Service: webui>
compose.parallel.feed_queue: Pending: set()
Attaching to gpt4all-ui_webui_1
compose.cli.verbose_proxy.proxy_callable: docker events <- (filters={'label': ['com.docker.compose.project=gpt4all-ui', 'com.docker.compose.oneoff=False']}, decode=True)
compose.cli.verbose_proxy.proxy_callable: docker events -> <docker.types.daemon.CancellableStream object at 0x7fe561dd3f50>
compose.cli.verbose_proxy.proxy_callable: docker wait <- ('fb22c7758f5ee59cc5560b77f918750e30f2306848aaf289ab1dd3e19546cc25')
compose.cli.verbose_proxy.proxy_callable: docker wait -> {'StatusCode': 132}
gpt4all-ui_webui_1 exited with code 132
compose.cli.verbose_proxy.proxy_callable: docker inspect_container <- ('fb22c7758f5ee59cc5560b77f918750e30f2306848aaf289ab1dd3e19546cc25')
compose.cli.verbose_proxy.proxy_callable: docker inspect_container -> {'AppArmorProfile': 'docker-default',
 'Args': ['app.py',
          '--host',
          '0.0.0.0',
          '--port',
          '9600',
          '--db_path',
          'data/database.db'],
 'Config': {'AttachStderr': False,
            'AttachStdin': False,

I am using Proxmox; I had to set the CPU type to host, as described here:
https://www.techaddressed.com/tutorials/proxmox-improve-vm-cpu-perf/
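
For reference, exit code 132 is 128 + SIGILL (signal 4), i.e. the process executed an illegal CPU instruction; llama.cpp-based backends are typically compiled with AVX/AVX2 support, which a VM with a restricted CPU model (such as kvm64) will not expose. A quick way to check which instruction-set flags the guest actually sees (plain Linux commands, nothing project-specific):

grep -o 'avx2\|avx\|fma\|sse4_2' /proc/cpuinfo | sort -u
lscpu | grep -i 'model name\|flags'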