Issues
- Support for Linux (Debian-class or other)? (#28 opened by farhi, 3 comments)
- Share link to `ggml-model.bin` (#27 opened by jo-elimu, 2 comments)
- Does this support GGUF models? (#25 opened by Kansi420, 1 comment)
- Support for GGMLv3? (#26 opened by trignomtry, 0 comments)
- App crash on MTK 1080+ 8 GB (#22 opened by suoko, 3 comments)
- Does it support GGUF (instead of GGML)? (#20 opened by Duxon, 6 comments)
- How to make it faster? (#6 opened by scrawnyether5669, 2 comments)
- Building libllama.so file for Android (#16 opened by abusaadp, 0 comments)
- libllama.so is 64-bit instead of 32-bit: error when loading the model on my Samsung device (#15 opened by VigneshPasupathy, 1 comment)
- Does this app support Llama 2? (#17 opened by GaryChen10128, 0 comments)
- Support GGML quantized models (#18 opened by Foul-Tarnished, 3 comments)
- Model file not working (#14 opened by abusaadp, 1 comment)
- How to run it on Mac? (#13 opened by realcarlos, 7 comments)
- It crashes (#7 opened by Asory2010, 2 comments)
- Add support for the latest k-quant models (#10 opened by x-legion, 2 comments)
- How does this work? (#8 opened by Leichesters, 1 comment)
- Where is the 'download folder'? (#9 opened by TianRuiHe, 2 comments)
- Model file too old (#4 opened by minfuel, 42 comments)
- Generation never starts: "context is null" (#1 opened by sharpy66, 2 comments)
- Can this work with Vicuna? (#3 opened by NoNamedCat, 1 comment)
- "Context is null" CPU tensor (#2 opened by zeerodark)