A simple Python GUI application for chatting with local Large Language Models through Ollama. It provides a chat interface with multiple tabs so you can keep separate conversations open at once.
- Support for multiple chat tabs
- Local LLM integration via Ollama
- Simple and intuitive interface
- Real-time responses
- Error handling and status messages
Before using this application, you need to:
- Install Ollama on your system:
  - Visit Ollama's website to download and install it
  - Make sure the Ollama service is running
- Install the Python dependencies:
  `pip install -r requirements.txt`
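The repository's requirements.txt is the source of truth for the exact packages; purely as a hypothetical example, if the Ollama calls are made over plain HTTP with `requests`, the file could be as small as:

```text
# Hypothetical contents -- check the repository's actual requirements.txt
requests>=2.31
```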
- Clone the repository:
  `git clone [your-repository-url]`
  `cd ChatGPT-GUI`
- Install the dependencies:
  `pip install -r requirements.txt`
- Start the Ollama service on your system
- Run the application:
  `python chatGPT.py`
- Using the interface:
  - Click "New Chat" to open a new chat tab
  - Type your message and press Enter to send it
  - Use "Delete Chat" to remove the current tab
  - The default model is "llama2", but it can be changed in the code (see the sketch below)
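For orientation, here is a minimal sketch of the kind of call requestLocal.py makes. The `MODEL` constant and `ask_ollama` function are illustrative names rather than the repository's actual identifiers, but the `/api/generate` endpoint and its `model`/`prompt`/`stream` fields are part of Ollama's documented REST API:

```python
import requests

# Illustrative constant -- the real default model name lives in the repo's code.
MODEL = "llama2"

def ask_ollama(prompt: str, model: str = MODEL) -> str:
    """Send one prompt to the local Ollama server and return its full reply."""
    response = requests.post(
        "http://localhost:11434/api/generate",  # Ollama's default endpoint
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    response.raise_for_status()
    # With stream=False, Ollama returns a single JSON object whose
    # "response" field holds the complete generated text.
    return response.json()["response"]
```

Swapping in a different model is then a matter of changing the model name (after pulling it with `ollama pull <name>`).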
The application is built using:
- Python 3.x
- Tkinter for GUI
- Ollama for local LLM integration
- Asynchronous request handling
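"Asynchronous request handling" here means running the slow network call off the Tkinter main loop so the window stays responsive while the model generates. A common pattern for this (a sketch, not necessarily the exact code in chatGPT.py) is:

```python
import threading
import tkinter as tk
from typing import Callable

def send_in_background(root: tk.Tk, prompt: str,
                       ask: Callable[[str], str],
                       on_done: Callable[[str], None]) -> None:
    """Run a potentially slow LLM request on a worker thread."""
    def worker() -> None:
        reply = ask(prompt)  # e.g. the ask_ollama() sketch above
        # Tkinter widgets may only be touched from the main thread, so the
        # result is handed back to the GUI via root.after().
        root.after(0, on_done, reply)

    threading.Thread(target=worker, daemon=True).start()
```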
ChatGPT-GUI/
├── chatGPT.py # Main GUI application
├── requestLocal.py # Ollama integration
└── requirements.txt # Python dependencies
The application handles common errors including:
- Ollama service not running
- Model not found/not downloaded
- Invalid inputs
- Connection issues
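As a sketch of how such failures can be mapped to readable status messages (the function name and message wording are illustrative, not the repository's exact code):

```python
import requests

def safe_ask(prompt: str, model: str = "llama2") -> str:
    """Call the local Ollama server, turning common failures into
    readable status messages instead of tracebacks."""
    try:
        r = requests.post(
            "http://localhost:11434/api/generate",
            json={"model": model, "prompt": prompt, "stream": False},
            timeout=120,
        )
        r.raise_for_status()
        return r.json()["response"]
    except requests.exceptions.ConnectionError:
        return "Error: Ollama server is not running."
    except requests.exceptions.HTTPError:
        # Ollama answers 404 when the requested model has not been pulled yet.
        if r.status_code == 404:
            return f"Error: model '{model}' not found -- try `ollama pull {model}`."
        return f"Error: HTTP {r.status_code} from Ollama."
    except requests.exceptions.Timeout:
        return "Error: the request to Ollama timed out."
```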
- If you get an "Ollama server is not running" error:
  - Check that Ollama is installed
  - Verify that the Ollama service is running (see the health-check sketch after this list)
  - Restart the Ollama service
- If model responses are slow:
  - Check your system resources
  - Consider using a lighter model
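A quick way to verify reachability: a default Ollama install listens on port 11434 and answers a plain GET on its root URL, so a small helper like this (illustrative, not part of the repository) can confirm the service is up:

```python
import requests

def ollama_is_up(base_url: str = "http://localhost:11434") -> bool:
    """Return True if the local Ollama server responds at all."""
    try:
        # The root URL replies "Ollama is running" when the service is up;
        # a ConnectionError means nothing is listening on the port.
        return requests.get(base_url, timeout=2).ok
    except requests.exceptions.ConnectionError:
        return False
```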
Feel free to submit issues and enhancement requests!