App crashes when loading a local model.
Closed this issue · 7 comments
Dhruvanand24 commented
Using StableLm-2-zephyr-1_6b-q1_1.gguf on an Android 11 device with 6 GB RAM.
Vali-98 commented
Can you provide a link to the model file used?
Dhruvanand24 commented
Vali-98 commented
It seems to be attempting to access memory outside its buffer, likely a llama.rn issue.
Dhruvanand24 commented
Okay, are you sending the whole conversation history as the prompt, or just the current message?
Vali-98 commented
The app fills the context with the latest message history until the context limit is reached.
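The filling strategy described above can be sketched roughly as follows. This is a hypothetical illustration, not the app's actual code: the function names, the `Message` shape, and the 4-characters-per-token estimate are all assumptions made for the example.

```typescript
// Illustrative sketch: keep the most recent messages that fit in the
// context budget, dropping the oldest ones first. Names are hypothetical.
interface Message {
  role: string;
  content: string;
}

// Rough token estimate (~4 characters per token); a real app would use
// the model's tokenizer instead.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

function buildPrompt(history: Message[], contextLimit: number): Message[] {
  const selected: Message[] = [];
  let used = 0;
  // Walk backwards from the newest message; stop once the budget is full.
  for (let i = history.length - 1; i >= 0; i--) {
    const cost = estimateTokens(history[i].content);
    if (used + cost > contextLimit) break;
    selected.unshift(history[i]); // keep chronological order
    used += cost;
  }
  return selected;
}
```

With a small budget, only the newest messages survive, which matches the behavior described: the whole recent history is sent, truncated at the context limit.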
Vali-98 commented
> Am I doing something wrong here?

No, this is simply how LLMs function: they take a processing step to consume the context and build the KV cache.
Next time, I would recommend not bringing up unrelated topics. This is an issue tracker, not a code support forum.
Dhruvanand24 commented
sure