Project to build a fully locally managed AI Desktop Linux machine
Today, there are several tools available for running LLMs on desktops or servers.
- Via chat
- Via an API
- Some solutions even let several LLMs work together in a RAG-like setup, for example with LlamaIndex; this means you can choose which open-source LLMs to use and which are best suited to the work you want them to do.
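As an illustration of the API path, Ollama exposes a local REST API (by default on `http://localhost:11434`). The sketch below builds a request for its `/api/generate` endpoint; it assumes an Ollama server is running locally and that a model named `mistral` has already been pulled (both are assumptions, not part of this project's setup):

```python
import json
import urllib.request

# Request payload for Ollama's /api/generate endpoint.
# The model name "mistral" is an example: use any model
# you have pulled locally (see `ollama list`).
payload = {
    "model": "mistral",
    "prompt": "Explain what a GGUF file is in one sentence.",
    "stream": False,  # return one JSON object instead of a stream
}

def generate(host: str = "http://localhost:11434") -> dict:
    """Send the prompt to a local Ollama server and return its JSON reply."""
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

With a server running, `generate()["response"]` contains the model's answer; the same endpoint works for any model installed locally.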
My intention, in this rapidly changing landscape, is to offer a quick way to install such machines, then test and use LLMs on the desktop as you wish: in the system interface itself, but also through dedicated applications. I would also like to centralize the LLMs downloaded by each solution, and offer gateways so you can use them with the others.
Dagger functions are provided to import Hugging Face GGUF models into a local Ollama instance and optionally push them to ollama.com.
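Under the hood, registering a GGUF file with Ollama goes through its Modelfile format. A minimal sketch, assuming a GGUF file already downloaded from Hugging Face into the current directory (the filename below is hypothetical):

```
# Modelfile: registers a local GGUF file as an Ollama model.
# The path is an example; point FROM at your downloaded file.
FROM ./mistral-7b-instruct.Q4_K_M.gguf
```

The model can then be created and run with `ollama create my-model -f Modelfile` followed by `ollama run my-model`.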
- Project website
- Version for CUDA (NVIDIA)
- Version for ROCm (AMD)
- Documentation
- GitHub