ollama_streamlit_demos

A Streamlit UI for Ollama with support for chat and vision models


🚀 Ollama x Streamlit Playground

This project demonstrates how to run and manage Ollama models locally through an interactive Streamlit UI.

The app has one page for chat-based models and another for multimodal (vision) models such as llava and bakllava.
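
For reference, here is a minimal sketch of what a multimodal call looks like with the ollama Python client. This is illustrative, not the app's actual code: it assumes the llava model has already been pulled, and photo.jpg is a placeholder path.

import ollama

# Ask llava to describe a local image; the path is a placeholder
response = ollama.chat(
    model="llava",
    messages=[
        {
            "role": "user",
            "content": "Describe this image in one sentence.",
            "images": ["photo.jpg"],  # local image file to send to the model
        }
    ],
)
print(response["message"]["content"])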

App in Action


Check out the video tutorial 👇

Watch the video

Features

  • Interactive UI: Utilize Streamlit to create a user-friendly interface.
  • Local Model Execution: Run your Ollama models locally without the need for external APIs.
  • Real-time Responses: Get real-time responses from your models directly in the UI (see the streaming sketch after this list).
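
As a rough illustration of the real-time streaming pattern, here is a minimal sketch using the ollama Python client and Streamlit's chat elements. The model name llama2 is just an example, and the demo app's actual implementation may differ:

import ollama
import streamlit as st

st.title("Ollama Chat")

if prompt := st.chat_input("Ask anything"):
    st.chat_message("user").write(prompt)

    def stream_tokens():
        # stream=True yields response chunks as the model generates them
        for chunk in ollama.chat(
            model="llama2",
            messages=[{"role": "user", "content": prompt}],
            stream=True,
        ):
            yield chunk["message"]["content"]

    with st.chat_message("assistant"):
        st.write_stream(stream_tokens())

st.write_stream (available in Streamlit 1.31+) renders tokens incrementally as they arrive, which is what makes the responses feel real-time.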

Installation

Before running the app, ensure you have Python installed on your machine. Then, clone this repository and install the required packages using pip:

git clone https://github.com/tonykipkemboi/ollama_streamlit_demos.git
cd ollama_streamlit_demos
pip install -r requirements.txt

Usage

To start the app, run the following command in your terminal:

streamlit run 01_💬_Chat_Demo.py

Then open the URL Streamlit prints in your terminal (http://localhost:8501 by default) to interact with the app.

NB: Make sure you have installed Ollama on your system and pulled the models you want to use.
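
For example, you can pull a chat model and a vision model ahead of time (the model names below are examples; pull whichever models you plan to use):

ollama pull llama2
ollama pull llava
ollama list  # confirm the models are available locally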

Contributing

Interested in contributing to this app?

  • Great!
  • I welcome contributions from everyone.

Got questions or suggestions?

  • Feel free to open an issue or submit a pull request.

Acknowledgments

👏 Kudos to the Ollama team for their efforts in making open-source models more accessible!