containers/ai-lab-recipes
Examples for building and running LLM services and applications locally with Podman
Python · Apache-2.0
Issues
- #592: 'make amd' fails with "Error: short-name resolution enforced but cannot prompt without a TTY" (1 comment)
- #576: 'make amd' fails in step 6/16 (0 comments)
- #574: Remove Deepspeed and VLLM (0 comments)
- #568: Error building bootc container (0 comments)
- #567: Add BASEIMAGE variable amd (0 comments)
- #566: Add BASEIMAGE variable Intel (3 comments)
- #553: Typo in llama-cpp-server readme (7 comments)
- #547: llama-cpp-server broken (2 comments)
- #546: chatbot recipe broken (4 comments)
- #538: build a RHEL based Milvus (0 comments)
- #530: arm testing framework broken (3 comments)
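The error reported in #592, "short-name resolution enforced but cannot prompt without a TTY", is Podman declining to guess a registry for an unqualified image name (e.g. `fedora` instead of `quay.io/fedora/fedora`) when the build runs without an interactive terminal. A sketch of one common workaround, not necessarily the fix adopted in this repo, is to configure short-name handling in `registries.conf`:

```toml
# /etc/containers/registries.conf (system-wide; a per-user file under
# ~/.config/containers/ also works). Values below are illustrative.

# Registries Podman may search when an image name has no registry prefix.
unqualified-search-registries = ["registry.fedoraproject.org", "quay.io", "docker.io"]

# "permissive" resolves short names without requiring a TTY prompt;
# the default "enforcing" mode is what triggers the error in #592.
short-name-mode = "permissive"
```

Using fully-qualified image references in the Containerfile (or a `BASEIMAGE` variable, as #567 and #566 propose) avoids the problem without changing host configuration.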