A modern, streamlined interface for running local AI models. Part of the ArtyLLaMa ecosystem.
OllamaUI represents our original vision for a clean, efficient interface to Ollama models. We focus on delivering essential functionality through a lean, stable interface that prioritizes user experience and performance.
- **Simplicity First**
  - Clean, intuitive interface
  - Focus on essential features
  - Optimized performance
- **Vision Support**
  - Full LLaMA 3.2 Vision integration
  - Drag-and-drop image analysis
  - Support for large vision models
- **Privacy-Focused**
  - Runs completely locally
  - No data collection
  - Your models, your control
**Prerequisites**

- Ollama v0.4.0+ running at `http://localhost:11434`
- Node.js (LTS)
- Yarn
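Before starting the UI, it can be useful to confirm the running Ollama instance meets the v0.4.0 minimum. The sketch below is illustrative and not part of OllamaUI's codebase; `meetsMinimum` is a hypothetical helper for comparing version strings.

```typescript
// Minimal sketch: check that an Ollama version string satisfies the
// v0.4.0+ prerequisite. Hypothetical helper, not OllamaUI's actual code.
export function meetsMinimum(version: string, minimum = "0.4.0"): boolean {
  const parse = (v: string) => v.replace(/^v/, "").split(".").map(Number);
  const a = parse(version);
  const b = parse(minimum);
  for (let i = 0; i < 3; i++) {
    const x = a[i] ?? 0;
    const y = b[i] ?? 0;
    if (x !== y) return x > y;
  }
  return true; // equal versions satisfy the minimum
}

// In practice the version would come from Ollama's /api/version endpoint:
// const { version } = await (await fetch("http://localhost:11434/api/version")).json();
```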
**Installation**

```bash
git clone https://github.com/ArtyLLaMa/OllamaUI.git
cd OllamaUI
yarn install
yarn run dev
```

Access OllamaUI at `http://localhost:3000`.
| Model | VRAM | Features |
|---|---|---|
| llama3.2-vision | 8GB+ | Image analysis, OCR |
| llama3.2-vision:90b | 64GB+ | Enhanced understanding |

```bash
ollama pull llama3.2-vision
```
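Once a vision model is pulled, image analysis goes through Ollama's `/api/generate` endpoint, which accepts base64-encoded images alongside the prompt. The helper below is a sketch for illustration; `buildVisionRequest` is not part of OllamaUI's API.

```typescript
// Sketch of the request shape a vision prompt sends to Ollama's
// /api/generate endpoint. Illustrative helper, not OllamaUI's actual code.
interface VisionRequest {
  model: string;
  prompt: string;
  images: string[]; // base64-encoded image data, per the Ollama API
  stream: boolean;
}

export function buildVisionRequest(
  prompt: string,
  imagesBase64: string[],
  model = "llama3.2-vision"
): VisionRequest {
  return { model, prompt, images: imagesBase64, stream: true };
}

// Usage:
// await fetch("http://localhost:11434/api/generate", {
//   method: "POST",
//   body: JSON.stringify(buildVisionRequest("Describe this image", [imageB64])),
// });
```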
- Built with SvelteKit & TypeScript
- Styled using TailwindCSS
- Real-time streaming support
- Dark/Light theme
- Mobile-responsive design
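The real-time streaming support above builds on how Ollama streams responses: newline-delimited JSON, one object per chunk, each carrying a `response` fragment until a final object with `done: true`. The pure helper below shows one way to extract the text from a buffered chunk; it is a sketch, not OllamaUI's actual implementation.

```typescript
// Sketch: collect the text fragments from a chunk of Ollama's
// newline-delimited JSON stream. Illustrative, not OllamaUI's actual code.
export function extractStreamText(ndjsonChunk: string): string {
  return ndjsonChunk
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line))
    .filter((obj) => !obj.done) // the final object carries stats, not text
    .map((obj) => obj.response ?? "")
    .join("");
}
```

In a real client this would run inside a `ReadableStream` loop over the `fetch` response body, appending each extracted fragment to the chat view as it arrives.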
OllamaUI is part of a broader ecosystem of AI tools and research:
- ArtyLLaMa - AI-powered creative platform featuring artifact generation and multi-model support
- Kroonen.ai - Independent research in AI systems and computational theory
We welcome contributions that align with our vision of simplicity and efficiency. See our Contributing Guidelines.
MIT License - feel free to use and modify, but please credit the original work.