ChatGPT-Style Web Interface for Ollama 🦙
Disclaimer: ollama-webui is a community-driven project and is not affiliated with the Ollama team in any way. This initiative is independent, and any inquiries or feedback should be directed to our community on Discord. We kindly request users to refrain from contacting or harassing the Ollama team regarding this project.
Also check our sibling project, OllamaHub, where you can discover, download, and explore customized Modelfiles for Ollama! 🦙
- 🖥️ Intuitive Interface: Our chat interface takes inspiration from ChatGPT, ensuring a user-friendly experience.
- 📱 Responsive Design: Enjoy a seamless experience on both desktop and mobile devices.
- ⚡ Swift Responsiveness: Enjoy fast and responsive performance.
- 🚀 Effortless Setup: Install seamlessly using Docker for a hassle-free experience.
- 💻 Code Syntax Highlighting: Enjoy enhanced code readability with our syntax highlighting feature.
- ✒️🔢 Full Markdown and LaTeX Support: Elevate your LLM experience with comprehensive Markdown and LaTeX capabilities for enriched interaction.
- 📥🗑️ Download/Delete Models: Easily download or remove models directly from the web UI.
- 🤖 Multiple Model Support: Seamlessly switch between different chat models for diverse interactions.
- 🔄 Multi-Modal Support: Seamlessly engage with models that support multimodal interactions, including images (e.g., LLaVA).
- 🧩 Modelfile Builder: Easily create Ollama modelfiles via the web UI. Create and add characters/agents, customize chat elements, and import modelfiles effortlessly through OllamaHub integration.
- ⚙️ Many Models Conversations: Effortlessly engage with multiple models in parallel, harnessing their unique strengths for optimal responses.
- 🤝 OpenAI Model Integration: Seamlessly utilize OpenAI models alongside Ollama models for a versatile conversational experience.
- 🔄 Regeneration History Access: Easily revisit and explore your entire regeneration history.
- 📜 Chat History: Effortlessly access and manage your conversation history.
- 📤📥 Import/Export Chat History: Seamlessly move your chat data in and out of the platform.
- 🗣️ Voice Input Support: Engage with your model through voice interactions; enjoy the convenience of talking to your model directly. Additionally, explore the option to send voice input automatically after 3 seconds of silence for a streamlined experience.
- ⚙️ Fine-Tuned Control with Advanced Parameters: Gain a deeper level of control by adjusting parameters such as temperature and defining your system prompts to tailor the conversation to your specific preferences and needs.
- 🔐 Auth Header Support: Effortlessly enhance security by adding Authorization headers to Ollama requests directly from the web UI settings, ensuring access to secured Ollama servers (a sample request is sketched just after this list).
- 🔗 External Ollama Server Connection: Seamlessly link to an external Ollama server hosted at a different address by configuring the environment variable during the Docker build phase. You can also set the external server connection URL from the web UI after the build.
- 🔒 Backend Reverse Proxy Support: Strengthen security by enabling direct communication between the Ollama Web UI backend and Ollama, eliminating the need to expose Ollama over your LAN.
- 🌟 Continuous Updates: We are committed to improving Ollama Web UI with regular updates and new features.
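To make the Auth Header feature above concrete, here is roughly what an authenticated request to a secured Ollama server looks like. The URL and token are placeholder assumptions, and the exact header your proxy expects may differ:

```bash
# Hypothetical secured Ollama instance behind a reverse proxy.
# /api/tags is Ollama's endpoint for listing local models.
curl -H "Authorization: Bearer YOUR_TOKEN" https://ollama.example.com/api/tags
```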
Don't forget to explore our sibling project, OllamaHub, where you can discover, download, and explore customized Modelfiles. OllamaHub offers a wide range of exciting possibilities for enhancing your chat interactions with Ollama! 🚀
If you don't have Ollama installed yet, you can use the provided Docker Compose file for a hassle-free installation. Simply run the following command:
```bash
docker compose up -d --build
```
This command will install both Ollama and Ollama Web UI on your system. Be sure to modify the `compose.yaml` file if you need GPU support or want to expose the Ollama API outside the container stack.
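As a rough sketch of what that modification can look like: GPU access in Docker Compose is typically granted through a `deploy.resources` reservation, and publishing port 11434 exposes the Ollama API. The service name and layout below are assumptions, so adapt the fragment to the `compose.yaml` shipped with this repository:

```yaml
# Hypothetical fragment -- merge into the existing Ollama service definition.
services:
  ollama:
    ports:
      - "11434:11434" # expose the Ollama API outside the container stack
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```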
Make sure you have the latest version of Ollama installed before proceeding with the installation. You can find the latest version of Ollama at https://ollama.ai/.
After installing Ollama, verify that Ollama is running by accessing the following link in your web browser: http://127.0.0.1:11434/. Note that the port number may differ based on your system configuration.
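You can also verify from the terminal; Ollama's root endpoint replies with a short status message when the server is up:

```bash
# Should print "Ollama is running" (adjust the port if yours differs)
curl http://127.0.0.1:11434/
```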
If Ollama is hosted on your local machine and accessible at http://127.0.0.1:11434/, run the following command. The `--add-host` flag maps `host.docker.internal` to the host gateway so the container can reach your local Ollama; on Linux this mapping is not available by default.
```bash
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway --name ollama-webui --restart always ghcr.io/ollama-webui/ollama-webui:main
```
Alternatively, if you prefer to build the container yourself, use the following command:
```bash
docker build -t ollama-webui .
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway --name ollama-webui --restart always ollama-webui
```
Your Ollama Web UI should now be hosted at http://localhost:3000 and accessible over your LAN (or network). Enjoy! 😄
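If the page doesn't load, a quick sanity check using standard Docker commands (nothing project-specific) is to confirm the container is running and inspect its logs:

```bash
docker ps --filter name=ollama-webui   # the container should be listed as Up
docker logs -f ollama-webui            # follow the startup logs for errors
```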
Change the `OLLAMA_API_BASE_URL` environment variable to match the external Ollama server URL:
```bash
docker run -d -p 3000:8080 -e OLLAMA_API_BASE_URL=https://example.com/api --name ollama-webui --restart always ghcr.io/ollama-webui/ollama-webui:main
```
Alternatively, if you prefer to build the container yourself, use the following command:
```bash
docker build -t ollama-webui .
docker run -d -p 3000:8080 -e OLLAMA_API_BASE_URL=https://example.com/api --name ollama-webui --restart always ollama-webui
```
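To confirm the external server is reachable before suspecting the web UI, you can query the Ollama API directly; `/api/tags` is Ollama's endpoint for listing local models (https://example.com/api being the placeholder from above):

```bash
# A JSON list of models indicates the external Ollama server is reachable
curl https://example.com/api/tags
```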
While we strongly recommend using our convenient Docker container installation for optimal support, we understand that some situations may require a non-Docker setup, especially for development purposes. Please note that non-Docker installations are not officially supported, and you might need to troubleshoot on your own.
Warning: Backend Dependency for Proper Functionality
To ensure the application works correctly, you must run both the backend and the frontend at the same time. Serving only the frontend in isolation is not supported and may lead to unpredictable, broken behavior; issues opened about frontend-only setups will not be addressed, as they fall outside the intended usage. For reliable results, follow the steps in this documentation exactly: use the frontend solely to build static files, then run the complete application with the provided backend commands. Configurations that deviate from these instructions are unsupported, and we may not be able to assist with them. Your cooperation in following the prescribed procedures is essential for a smooth user experience and effective issue resolution.
Run the following commands to install:
```bash
git clone https://github.com/ollama-webui/ollama-webui.git
cd ollama-webui/

# Copy the required .env file
cp -RPp example.env .env

# Build the frontend
npm i
npm run build

# Serve the frontend with the backend
cd ./backend
pip install -r requirements.txt
sh start.sh
```
You should have the Ollama Web UI up and running at http://localhost:8080/. Enjoy! 😄
The Ollama Web UI consists of two primary components: the frontend and the backend (which serves as a reverse proxy, handling static frontend files and additional features). Both need to be running concurrently for the development environment using `npm run dev`. Alternatively, you can set `PUBLIC_API_BASE_URL` during the build process to have the frontend connect directly to your Ollama instance, or build the frontend as static files and serve them with the backend.
- Clone and Enter the Project:

  ```bash
  git clone https://github.com/ollama-webui/ollama-webui.git
  cd ollama-webui/
  ```
- Create and Edit `.env`:

  ```bash
  cp -RPp example.env .env
  ```
- Install Node Dependencies:

  ```bash
  npm install
  ```
- Run in Dev Mode or Build for Deployment:

  - Dev Mode (requires the backend to be running simultaneously):

    ```bash
    npm run dev
    ```

  - Build for Deployment:

    ```bash
    # `PUBLIC_API_BASE_URL` overwrites the value in `.env`
    PUBLIC_API_BASE_URL='https://example.com/api' npm run build
    ```
- Test the Build with Caddy (or your preferred server):

  ```bash
  curl https://webi.sh/caddy | sh
  PUBLIC_API_BASE_URL='https://localhost/api' npm run build
  caddy run --envfile .env --config ./Caddyfile.localhost
  ```
If you wish to run the backend for deployment, ensure that the frontend is built so that the backend can serve the frontend files along with the API route.
- Install Python Requirements:

  ```bash
  cd ./backend
  pip install -r requirements.txt
  ```
- Run Python Backend:

  - Dev Mode with Hot Reloading:

    ```bash
    sh dev.sh
    ```

  - Deployment:

    ```bash
    sh start.sh
    ```
Now, you should have the Ollama Web UI up and running at http://localhost:8080/. Feel free to explore the features and functionalities of Ollama! If you encounter any issues, please refer to the instructions above or reach out to the community for assistance.
See TROUBLESHOOTING.md for information on how to troubleshoot and/or join our Ollama Web UI Discord community.
Here are some exciting tasks on our roadmap:
- 🔍 RAG Integration: Experience first-class retrieval augmented generation support, enabling chat with your documents.
- 🔐 Access Control: Securely manage requests to Ollama by utilizing the backend as a reverse proxy gateway, ensuring only authenticated users can send specific requests.
- 🧪 Research-Centric Features: Empower researchers in the fields of LLM and HCI with a comprehensive web UI for conducting user studies. Stay tuned for ongoing feature enhancements (e.g., surveys, analytics, and participant tracking) to facilitate their research.
- 📊 User Study Tools: Provide specialized tools, such as heat maps and behavior tracking modules, to help researchers capture and analyze user behavior patterns with precision and accuracy.
- 📚 Enhanced Documentation: Elevate your setup and customization experience with improved, comprehensive documentation.
Feel free to contribute and help us make Ollama Web UI even better! 🌟
A big shoutout to our amazing supporters who are helping to make this project possible! 🙌
This project is licensed under the MIT License - see the LICENSE file for details. 📄
If you have any questions, suggestions, or need assistance, please open an issue or join our Ollama Web UI Discord community or Ollama Discord community to connect with us! 🤝
Created by Timothy J. Baek - Let's make Ollama Web UI even more amazing together! 💪