OllamaGPT is a user-friendly interface for interacting with Ollama, a service for running large language models locally. This project uses Streamlit to create an interactive chat interface that communicates with the Ollama API.
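To make the data flow concrete, here is a minimal sketch of what that Streamlit-to-Ollama round trip can look like. It is an illustration under assumptions, not the repository's actual `app.py`: the model name `llama2` is a placeholder, and the request shape follows Ollama's documented `/api/generate` endpoint.

```python
# Minimal sketch of the Streamlit-to-Ollama round trip (illustrative,
# not the project's actual app.py; model name and timeout are assumptions).
import requests
import streamlit as st

BASE_URL = "http://127.0.0.1:11434"  # Ollama's default address

st.title("OllamaGPT")

prompt = st.chat_input("Ask the model something...")
if prompt:
    st.chat_message("user").write(prompt)
    # /api/generate is Ollama's documented completion endpoint;
    # "stream": False asks for the whole answer in one JSON object.
    resp = requests.post(
        f"{BASE_URL}/api/generate",
        json={"model": "llama2", "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    st.chat_message("assistant").write(resp.json()["response"])
```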
- Intuitive chat interface
- Utilization of locally installed Ollama models
- Display of the currently used model (see the model-listing sketch after this list)
- Error handling for Ollama server connection issues
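The two model-related features map naturally onto Ollama's documented `GET /api/tags` endpoint, which lists the locally downloaded models. The sketch below is illustrative only; the helper name and UI layout are assumptions, not the project's actual code.

```python
# Illustrative sketch: list locally installed models via Ollama's
# GET /api/tags endpoint and show which one is currently selected.
import requests
import streamlit as st

BASE_URL = "http://127.0.0.1:11434"

def get_local_models(base_url=BASE_URL):
    """Return the names of models Ollama has downloaded locally."""
    resp = requests.get(f"{base_url}/api/tags", timeout=5)
    resp.raise_for_status()
    # The endpoint answers with JSON shaped like {"models": [{"name": ...}, ...]}.
    return [m["name"] for m in resp.json().get("models", [])]

models = get_local_models()
model = st.sidebar.selectbox("Model", models)  # pick among installed models
st.caption(f"Currently using: {model}")        # display the model in use
```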
- Python 3.6+
- Ollama installed and configured on your local machine, with at least one model downloaded
- The following Python libraries:
- streamlit
- requests
- subprocess (part of the Python standard library, so it does not need to be installed separately)
- Clone this repository:

  ```bash
  git clone https://github.com/toine08/ollamaGPT.git
  cd ollamaGPT
  ```
- Install the dependencies:

  ```bash
  pip install -r requirements.txt
  ```
- (Optional) Ensure Ollama is running on your local machine. You can start it by running:

  ```bash
  ollama serve
  ```
- Launch the Streamlit application:

  ```bash
  streamlit run app.py
  ```
- Start chatting with the Ollama AI through the user interface!
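Under the hood, a Streamlit chat has to survive the full script rerun that happens on every user interaction. One common pattern, shown here as a sketch under assumptions rather than a description of this repository's `app.py`, keeps the transcript in `st.session_state`:

```python
# Illustrative sketch: preserve the chat transcript across Streamlit reruns.
import streamlit as st

if "messages" not in st.session_state:
    st.session_state.messages = []  # each entry: {"role": ..., "content": ...}

# Replay the conversation so far on every rerun.
for msg in st.session_state.messages:
    st.chat_message(msg["role"]).write(msg["content"])

prompt = st.chat_input("Say something")
if prompt:
    st.session_state.messages.append({"role": "user", "content": prompt})
    st.chat_message("user").write(prompt)
    # ...query the Ollama API here (see the earlier sketch) and append
    # the assistant's reply to st.session_state.messages...
```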
- The application assumes the Ollama server is running at `http://127.0.0.1:11434`. If your setup is different, adjust the `baseURL` variable in the code.
- If the Ollama server is not accessible, the application will display an error message (a sketch of such a check follows below).
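A reachability probe along these lines could produce that behavior. This is a hedged sketch, not the repository's actual code: the function name and error message are made up, `BASE_URL` mirrors the default above, and the probe relies on a running Ollama instance answering a plain `GET` at its base URL.

```python
# Illustrative sketch: fail gracefully when the Ollama server is down.
import requests
import streamlit as st

BASE_URL = "http://127.0.0.1:11434"  # adjust if your Ollama runs elsewhere

def ollama_is_reachable(base_url=BASE_URL):
    """Return True if the Ollama server answers at its base URL."""
    try:
        # A running Ollama instance replies to GET / with "Ollama is running".
        requests.get(base_url, timeout=2)
        return True
    except requests.exceptions.ConnectionError:
        return False

if not ollama_is_reachable():
    st.error(f"Cannot reach the Ollama server at {BASE_URL}. "
             "Start it with `ollama serve` and reload this page.")
    st.stop()
```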
Contributions to this project are welcome. Feel free to open an issue or submit a pull request.