alexpinel/Dot

Permanent spinner when attempting to prompt

shawnyeager opened this issue · 14 comments

v1 of Dot (download), macOS 14.4 (23E214). This occurs in both small and big models.

CleanShot 2024-03-17 at 11 30 55@2x

Yep got the same on Windows. Just doesn't work.

Interesting, are you using the standalone app or did you clone the repo? You can press 'Ctrl + Shift + I' to open the dev tools. Sometimes there is an error where, if the answer is too long, it gets truncated and the communication between the backend and frontend doesn't quite work. If you are running the app in dev mode you should also be able to see error logs in the terminal.

Standalone. I've run into several other problems. I don't think this is a v1 release. I'll keep an eye out for the next release. The idea is promising.


Standalone. I tried the console -
console

It's a single page .doc file that I'm trying to read as a test.

Hmmm, that error is not related to the LLM itself, so it cannot be the source of the issue. When loading the doc, did it appear immediately in the left bar or did it take some time?

It takes around 5 seconds to load the doc after I select the folder. I can't seem to select an individual file in that folder though.

Seeing the same thing with a multi-page PDF in the Windows standalone app.

Edit: I see this error appear during the file upload process, before any questions are asked.

I'm seeing the same issue. M1 mac with 8gb ram; MacOS 14.3.1. I'm able to add documents but never get a response in the chat. Using the standalone app.

It looks like a python process starts up, consumes ram, and gets killed
Screenshot 2024-04-09 at 10 33 21 AM

That looks like the app is trying to load the LLM but cannot allocate enough RAM so it kills the process. Do you happen to know if Big Dot works?

I haven't properly tested Dot on 8GB RAM devices, but there are some things that might help solve the issue. If you right click on the Dot icon in the Applications folder you can reveal the package contents of the app. From there you can navigate to Contents/Resources/llm/scripts, where you should find the three Python scripts used to handle the LLM. You can find the following lines in bigdot.py, and something very similar in docdot.py:
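If you prefer the Terminal to Finder, the same folder can be reached directly. The path below is assembled from the steps above and assumes Dot is installed in /Applications; adjust it if the app lives elsewhere:

```shell
# Sketch: list the bundled LLM scripts from Terminal instead of
# revealing the package contents in Finder. Path assumed from the
# comment above; adjust if Dot is installed elsewhere.
SCRIPTS_DIR="/Applications/Dot.app/Contents/Resources/llm/scripts"
ls "$SCRIPTS_DIR" 2>/dev/null || echo "Dot not found at $SCRIPTS_DIR"
```

You should see bigdot.py, docdot.py, and embeddings.py in that directory.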
Screenshot 2024-04-09 at 17 13 42

From here you can try lowering n_batch to 1 and n_ctx to a lower value like 2000. This will affect the performance and quality of the app but should also reduce the RAM consumption (I'm not entirely sure it will be enough, though!). At the same time, you should also adapt the chunk size in embeddings.py to match the new context window set in n_ctx:
Screenshot 2024-04-09 at 17 23 21

If you set n_ctx to 2000 you can adjust chunk_size to 2000 and chunk_overlap to 1000.
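Putting the suggested numbers together, the low-memory settings might look like the sketch below. This assumes llama-cpp-python-style keyword arguments as shown in the screenshots; the variable names here are illustrative, not copied from the repo:

```python
# bigdot.py / docdot.py: lowered LLM settings to cap RAM use.
llm_kwargs = {
    "n_ctx": 2000,  # smaller context window than the default
    "n_batch": 1,   # process the smallest possible batch at a time
}

# embeddings.py: keep the text splitter in step with the new window.
splitter_kwargs = {
    "chunk_size": 2000,     # match n_ctx
    "chunk_overlap": 1000,  # half of chunk_size, per the comment above
}

# Sanity checks: a chunk must fit inside the model's context window,
# and the overlap must be smaller than the chunk itself.
assert splitter_kwargs["chunk_size"] <= llm_kwargs["n_ctx"]
assert splitter_kwargs["chunk_overlap"] < splitter_kwargs["chunk_size"]
```

The overlap being half the chunk size simply preserves the ratio used by the original settings; any value below chunk_size should work.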

Please let me know if that helps!

Same issue on macOS (M3, 64GB RAM), no luck... "Dot is typing..."

Still an issue running 0.9.2 CPU version on Windows 11. Looks like the Python process is starting and almost immediately exiting.
Are there any logs anywhere to try and diagnose?