Issues
- Way to kill slow query? (#19, opened by bshor, 0 comments)
- docker ports are safer on localhost (#18, opened by datapumpernickel, 0 comments)
- Add load balancing (#17, opened by JBGruber, 1 comment)
- Text embedding slow compared to Python client (#16, opened by JBGruber, 1 comment)
- I can't get options(rollama_config) to work (#13, opened by reijmerniek, 1 comment; see the configuration sketch after this list)
- create_model gives parse error (#11, opened by Arthur-Zestco, 2 comments)
- Error when pulling models from ollama (#14, opened by sadettindemirel, 0 comments)
- Model suggestions for certain tasks? (#15, opened by sadettindemirel, 1 comment)
- chat_history() content is sorted incorrectly (#12, opened by reijmerniek, 1 comment)
- installation error in R 4.1.2. Error in parse(outFile) : ... unexpected input (#10, opened by SoaresAlisson, 3 comments)
- Wrap Ollama API endpoints (#1, opened by JBGruber, 0 comments)
- llama3 was released: make it the default (#9, opened by JBGruber, 4 comments)
- repeated query return differed results (#8, opened by whweve, 4 comments; see the seeding sketch after this list)
- Headers for authentication (#7, opened by paluigi, 6 comments)
- Ollama now supports embedding models (#5, opened by kasperwelbers, 1 comment)
- Vectorise model parameter (#4, opened by JBGruber, 6 comments)
- `model_params` do not seem to work (#2, opened by JBGruber, 0 comments)
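
Several of these issues (#13, #14, #18) come down to pointing rollama at a running Ollama server and pulling a model. A minimal sketch of that setup, assuming a local Ollama instance on its default port and the `rollama_server` option, `ping_ollama()`, `pull_model()`, and `query()` as documented in the rollama README:

```r
library(rollama)

# Point rollama at a local Ollama instance (default port assumed).
options(rollama_server = "http://localhost:11434")

ping_ollama()                  # verify the server is reachable
pull_model("llama3")           # download a model before first use (the step failing in #14)
query("Why is the sky blue?")  # one-off prompt to confirm everything works
```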
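
Issue #8 (repeated queries return different results) and issue #2 (`model_params`) touch the same mechanism: per-request generation options. A sketch of a reproducible query, assuming `query()` forwards a `model_params` list to Ollama's documented `seed` and `temperature` options:

```r
library(rollama)

# Fixing the seed and setting temperature to 0 removes sampling
# randomness, so repeated calls should return identical output
# (assuming model_params is forwarded to the Ollama API).
query(
  "Why is the sky blue?",
  model_params = list(seed = 42, temperature = 0)
)
```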