How can Agenta be integrated with the Ollama LLM platform?
kmx2 opened this issue · 1 comment
I need to use Agenta entirely on a local server with the local LLM platform Ollama. How can I do that?
Hey @kmx2! I'm here to help you out with any issues or questions you have. Let's see how we can integrate Agenta with the local LLM platform Ollama.
To integrate Agenta with the Ollama LLM platform and use it completely on a local server, follow these steps:
1. Install the Agenta Python SDK and CLI:

   ```bash
   pip install agenta
   ```
2. Set up the Agenta web platform:

   - Clone the Agenta repository and navigate to the folder:

     ```bash
     git clone https://github.com/Agenta-AI/agenta.git
     cd agenta
     ```

   - Launch the Agenta server using Docker Compose:

     ```bash
     docker compose -f "docker-compose.yml" up -d --build
     ```

   - Verify the installation by opening your browser and going to http://localhost. If you see the Agenta web interface, you're good to go.
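   As an extra check from the command line, a quick sanity test (assuming the default Docker Compose setup above, with the web interface exposed on port 80):

   ```bash
   # Confirm the Agenta containers are up.
   docker compose ps

   # The web interface should respond on http://localhost.
   curl -I http://localhost
   ```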
3. Set up environment variables:

   - `AGENTA_API_KEY`: needed for cloud users (not needed for a local server).
   - `AGENTA_HOST`: set to `http://localhost` if you are self-hosting.
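   For a self-hosted setup, this amounts to pointing the SDK and CLI at your local Agenta instance. A minimal sketch, assuming a POSIX shell:

   ```bash
   # Point the Agenta SDK and CLI at the locally hosted platform.
   export AGENTA_HOST="http://localhost"

   # AGENTA_API_KEY is only required for Agenta Cloud, so it is left unset here.
   ```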
4. Write a custom application using the Agenta SDK:

   ```python
   import json

   import requests
   import agenta as ag

   default_prompt = "Please write a joke about {subject}"

   # Ollama's generate endpoint; by default Ollama listens on port 11434.
   # Replace with your local Ollama server URL if you run it elsewhere.
   url = "http://localhost:11434/api/generate"

   ag.config.register_default(prompt=default_prompt, temperature=0.8)


   @ag.entrypoint
   def generate(subject: str) -> str:
       prompt = ag.config.prompt.format(subject=subject)
       data = {
           "model": "llama3",  # any model you have pulled with `ollama pull`
           "prompt": prompt,
           "stream": False,
           "options": {"temperature": ag.config.temperature},
       }
       response = requests.post(url, data=json.dumps(data))
       response.raise_for_status()
       # Ollama returns the generated text in the "response" field.
       return response.json()["response"]
   ```
   This code sets up a simple LLM app that generates jokes about a given subject using the locally hosted Ollama platform. The `ag.config.register_default` function sets the default values for the prompt and temperature, and the `@ag.entrypoint` decorator marks the function that is called to generate the joke.
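   Before wiring this into Agenta, it can help to confirm that the request format used above works against your Ollama server. A rough check, assuming Ollama is running on its default port 11434 and a model such as `llama3` has already been pulled (substitute whatever model you use):

   ```bash
   curl http://localhost:11434/api/generate \
     -d '{"model": "llama3", "prompt": "Please write a joke about cats", "stream": false}'
   ```

   If this returns a JSON object with a `response` field, the app above should work against the same endpoint.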
5. Deploy the application using the Agenta CLI:

   - Ensure that the Ollama LLM platform is running on your local server (see the sketch after the commands below).
   - Deploy the application by running the following commands in the code folder:

     ```bash
     agenta init
     agenta variant serve app.py
     ```
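   To cover the first point, a minimal sketch of getting Ollama running locally, assuming you have installed Ollama and want to use a model such as `llama3` (use whichever model `app.py` references):

   ```bash
   # Start the Ollama server if it is not already running as a background service.
   ollama serve &

   # Download the model referenced in app.py.
   ollama pull llama3
   ```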
By following these steps, you can integrate Agenta with the Ollama LLM platform and use it entirely on a local server [1][2][3].