Mintplex-Labs/anything-llm

[BUG]: White screen once accessing llm-preference

Closed this issue · 4 comments

How are you running AnythingLLM?

Not listed

What happened?

Hello, I am using the AnythingLLM render Docker image and deploying it through k8s. Whenever I enter the settings menu and navigate to AI Providers -> LLM, the web page turns white. Opening any other settings window works fine without any issue.
In a separate workspace I have a 22-day-old AnythingLLM render Docker image that does not have this issue.

Are there known steps to reproduce?

  1. Use mintplexlabs/anythingllm:render as the image for your k8s deployment (a minimal sketch of such a deployment follows this list).
  2. Navigate to the web page (Chrome or Edge).
  3. Navigate to Settings -> AI Providers -> LLM.
  4. The screen turns white.
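
For reference, a minimal Deployment along these lines is sketched below. It is for illustration only; the names and labels are placeholders and are not copied from the official manifest:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: anything-llm        # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: anything-llm
  template:
    metadata:
      labels:
        app: anything-llm
    spec:
      containers:
        - name: anything-llm
          image: "mintplexlabs/anythingllm:render"
          ports:
            - containerPort: 3001   # matches SERVER_PORT below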

You don't have the ENV file set up correctly, which is why you are getting that crash.

Also, why are you using the render image on k8s? That Docker image is for Render.com/Railway.app; otherwise you should be using latest.

Also, there is a k8s manifest:
https://github.com/Mintplex-Labs/anything-llm/blob/master/cloud-deployments/k8/manifest.yaml

The initial deployment I had is based on the k8s manifest you shared, and it used to work.

I think the decision to use the render image is correct, as that manifest also uses the render image (image: "mintplexlabs/anythingllm:render"), so I did the same.

To which ENV file are you referring? The only ENV variables I can find in the manifest have been set up identically in my deployment. Are there any other ENV files that need to be set up?

You did not add anything to this section?

          - name: AWS_REGION
            value: "{{ aws_region }}"
          - name: AWS_ACCESS_KEY_ID
            value: "{{ aws_access_id }}"
          - name: AWS_SECRET_ACCESS_KEY
            value: "{{ aws_access_secret }}"
          - name: SERVER_PORT
            value: "3001"
          - name: JWT_SECRET
            value: "my-random-string-for-seeding" # Please generate random string at least 12 chars long.
          - name: STORAGE_DIR
            value: "/storage"
          - name: NODE_ENV
            value: "production"
          - name: UID
            value: "1000"
          - name: GID
            value: "1000"

If so, what should happen on deployment is that there will be an .env file in this mount:

        volumeMounts:
          - name: anything-llm-server-storage-volume-mount
            mountPath: /storage

That .env file should contain all of the current ENV settings for the system on startup. I don't know what your container logs look like, as those would provide more context here. These templates are community created, so I cannot guarantee they work; I know some people have had success with them, which is why they are published.

I have solved the issue!
Originally I was planning to configure the LLM through the settings, but this produced the white screen. So instead I configured the LLM through environment variables. This solved the issue: the white screen no longer appears, and I can freely change the LLM afterwards through the GUI. Could it be that there is no default LLM set in the render version, causing the error?
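
For anyone who hits the same thing, the entries I added to the container's env: list look roughly like the following. I am paraphrasing from the server's .env.example for the OpenAI provider, so double-check the exact variable names against the AnythingLLM version and provider you are using:

          # Variable names assumed from the server's .env.example (OpenAI provider);
          # verify them for your version. Values below are placeholders.
          - name: LLM_PROVIDER
            value: "openai"
          - name: OPEN_AI_KEY
            value: "sk-..."        # placeholder, substitute your own key
          - name: OPEN_MODEL_PREF
            value: "gpt-4o"        # placeholder model name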