Vali-98/ChatterUI

App crashes on loading local model.

Closed this issue · 7 comments

Using StableLm-2-zephyr-1_6b-q1_1.gguf on a device with 6 GB of RAM running Android 11.

Can you provide a link to the model file used?

It seems to be attempting to access memory outside its buffer, likely a llama.rn issue.

Okay, are you sending the whole conversation history as a prompt or just the current message?

The app fills the context with the latest message history until the context limit is reached.
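Roughly, that behavior looks like the sketch below. This is a minimal illustration only, not ChatterUI's actual code; the `Message` type and `countTokens` helper are hypothetical stand-ins for whatever the app really uses.

```typescript
// Sketch of context filling: newest messages are kept until the token budget runs out.
interface Message {
    role: 'user' | 'assistant';
    content: string;
}

// Hypothetical tokenizer stand-in: roughly 4 characters per token.
const countTokens = (text: string): number => Math.ceil(text.length / 4);

function buildPrompt(history: Message[], contextLimit: number): string {
    const selected: string[] = [];
    let used = 0;

    // Walk the history from newest to oldest, stopping once the limit is reached.
    for (let i = history.length - 1; i >= 0; i--) {
        const entry = `${history[i].role}: ${history[i].content}\n`;
        const cost = countTokens(entry);
        if (used + cost > contextLimit) break;
        selected.unshift(entry);
        used += cost;
    }

    return selected.join('');
}
```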

Am I doing something wrong here?

No, this is simply how LLMs function: they take a prompt-processing step to consume the context and build the KV cache before generating tokens.
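Conceptually, that two-phase flow looks like the sketch below. This is not the llama.rn API; `KVCache`, `prefill`, and `sampleNext` are hypothetical names used only to illustrate why sending a long history makes the first step slower.

```typescript
// The prefill step evaluates every prompt token once to populate the KV cache;
// generation then reuses that cache and only appends one new token per step.
type KVCache = { positions: number };

function prefill(promptTokens: number[]): KVCache {
    // Cost grows with prompt length, which is why a long chat history
    // makes this step take noticeably longer.
    return { positions: promptTokens.length };
}

function generate(
    cache: KVCache,
    maxNewTokens: number,
    sampleNext: (cache: KVCache) => number,
): number[] {
    const output: number[] = [];
    for (let i = 0; i < maxNewTokens; i++) {
        const token = sampleNext(cache); // reuses cached keys/values
        cache.positions += 1;            // only the new token is added
        output.push(token);
    }
    return output;
}
```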

Next time, I would recommend not bringing up unrelated topics. This is an issue tracker, not a code support forum.