Issues
- Dependency Dashboard (#136, opened by renovate, 3 comments)
- is my install busted or --nocontainer broken? (#167, opened by juhp, 9 comments)
- Create Nvidia Branch for testing and development (#239, opened by bmahabirbu, 7 comments)
- Pip installation issue (#246, opened by arouene, 2 comments)
- Using Nvidia GPU can return junk in chatbox (#247, opened by bmahabirbu, 6 comments)
- Bug in parsing args with `--nocontainer` (#201, opened by Ben-Epstein, 10 comments)
- Change install path to /var/usrlocal or /usr/local to support Atomic Systems (#177, opened by boredsquirrel, 0 comments)
- rpm: huggingface-hub, omlmd dependencies needed (#211, opened by lsm5, 6 comments)
- Apple silicon `illegal hardware instruction` (#204, opened by Ben-Epstein, 4 comments)
- Add podman serve --generate compose MODEL, which would generate a docker-compose file for running an AI Model Service (#184, opened by rhatdan, 0 comments)
- Add podman serve --generate kube MODEL (#183, opened by rhatdan, 3 comments)
- RFE: allow running ramalama from toolbox (#171, opened by juhp, 2 comments)
- Vision models (#150, opened by p5, 11 comments)
- chat mode ordering issue (#168, opened by juhp, 3 comments)
- Need a way to remove models from local storage (#22, opened by rhatdan, 4 comments)
- Implement shortnames (#38, opened by ericcurtin, 20 comments)
- Enable flake8 (#125, opened by ericcurtin, 1 comment)
- Dynamically sized "ramalama list" columns (#119, opened by ericcurtin, 2 comments)
- A llama.cpp contribution for ramalama (#63, opened by ericcurtin, 7 comments)
- Consolidate with instructlab container images (#43, opened by ericcurtin, 2 comments)
- Duplicate function definitions in `cli.py` (#91, opened by swarajpande5, 3 comments)
- Work with k8s yaml (#70, opened by ericcurtin, 4 comments)
- What API / spec is this serving / using? (#75, opened by cdrage, 5 comments)
- Confusion after running serve (#73, opened by cdrage, 2 comments)
- Unsure if running native or container? (#74, opened by cdrage, 1 comment)
- macOS native support (#39, opened by ericcurtin, 1 comment)
- Implement whisper.cpp (#51, opened by ericcurtin, 4 comments)
- models as OCI artefacts support (#13, opened by ericcurtin, 1 comment)