This repository provides instructions and code snippets for using Ollama in Google Colab notebooks.
To install Ollama in your Colab environment, follow these steps:
- Run the following command in a code cell to install the required dependencies:

  ```bash
  ! sudo apt-get install -y pciutils
  ```
- Run the installation script provided by Ollama:

  ```bash
  ! curl https://ollama.ai/install.sh | sh
  ```
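  Optionally, you can verify that the `ollama` binary is now available (assuming the install script added it to your PATH):

  ```bash
  ! ollama --version
  ```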
- Import the necessary libraries and define the `ollama` function:

  ```python
  import os
  import threading
  import subprocess
  import requests
  import json

  def ollama():
      # Make the server reachable from the notebook and allow cross-origin requests
      os.environ['OLLAMA_HOST'] = '0.0.0.0:11434'
      os.environ['OLLAMA_ORIGINS'] = '*'
      # Launch the Ollama server as a background process
      subprocess.Popen(["ollama", "serve"])
  ```
Once Ollama is installed, you can use it in your Colab notebook as follows:
- Start the Ollama server in a background thread by running the following code:

  ```python
  ollama_thread = threading.Thread(target=ollama)
  ollama_thread.start()
  ```
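  The server can take a few seconds to come up. As a minimal sketch, you could poll it until it responds before moving on; `wait_for_ollama` is a hypothetical helper (not part of this repository) and assumes the default port 11434:

  ```python
  import time
  import requests

  def wait_for_ollama(url='http://localhost:11434', timeout=30):
      # Hypothetical helper: poll the server root until it answers or the timeout expires
      deadline = time.time() + timeout
      while time.time() < deadline:
          try:
              if requests.get(url).status_code == 200:
                  return True
          except requests.exceptions.ConnectionError:
              time.sleep(1)  # server not ready yet; retry
      return False

  wait_for_ollama()
  ```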
- Run the Ollama model of your choice. For example, to use the `mistral` model, execute:

  ```bash
  ! ollama run mistral
  ```

  After you see the message `Send a message (/? for help)`, stop the cell execution and proceed to the next step.
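  Alternatively, to download the model weights without opening an interactive chat session, you can pull the model directly:

  ```bash
  ! ollama pull mistral
  ```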
- Now you need to start the Ollama server again by running the same code as before:

  ```python
  ollama_thread = threading.Thread(target=ollama)
  ollama_thread.start()
  ```
- Now you can interact with Ollama by sending prompts and receiving responses. Here's an example prompt:

  ```python
  prompt = """
  What is AI? Can you explain in three paragraphs?
  """
  ```
- Then, run the following code to receive the response to your prompt. Here, `stream` is set to `False`, so the full response arrives as a single JSON object; a streaming approach that prints the response as it is generated is shown below these steps:

  ```python
  url = 'http://localhost:11434/api/chat'
  payload = {
      "model": "mistral",
      "temperature": 0.6,
      "stream": False,
      "messages": [
          {"role": "system", "content": "You are an AI assistant!"},
          {"role": "user", "content": prompt}
      ]
  }

  # Send the chat request and decode the JSON response
  response = requests.post(url, json=payload)
  message_str = response.content.decode('utf-8')
  message_dict = json.loads(message_str)
  print(message_dict['message']['content'])
  ```
This will send the prompt to the Ollama model and print its response.
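If you prefer streaming, here is a minimal sketch using the same `/api/chat` endpoint with `stream` set to `True`. In that mode, the server returns one JSON object per line, each carrying a chunk of the reply in `message.content`, with `done` set to `true` on the final line:

```python
url = 'http://localhost:11434/api/chat'
payload = {
    "model": "mistral",
    "temperature": 0.6,
    "stream": True,
    "messages": [
        {"role": "system", "content": "You are an AI assistant!"},
        {"role": "user", "content": prompt}
    ]
}

# With stream=True, the server sends newline-delimited JSON chunks
with requests.post(url, json=payload, stream=True) as response:
    for line in response.iter_lines():
        if not line:
            continue
        chunk = json.loads(line)
        # Print each piece of the reply as it arrives; 'done' marks the last chunk
        print(chunk.get('message', {}).get('content', ''), end='', flush=True)
        if chunk.get('done'):
            break
```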
This content is licensed under the MIT License - see the LICENSE file for details.
If you find it helpful, consider supporting us in the following ways:
- ⭐ Star this repository on GitHub.
- 🐦 Follow us on X (Twitter): @AITwinMinds
- 📣 Join our Telegram channel AITwinMinds for discussions and announcements.
- 🎥 Subscribe to our YouTube channel AITwinMinds for video tutorials and updates.
- 📸 Follow us on Instagram: @AITwinMinds
Don't forget to share it with your friends!
For any inquiries, please contact us at AITwinMinds@gmail.com.