This is a PHP library for Ollama. Ollama is an open-source project that serves as a powerful and user-friendly platform for running LLMs on your local machine. It acts as a bridge between the complexities of LLM technology and the desire for an accessible and customizable AI experience.
Whether you use this project, have learned something from it, or just like it, please consider supporting it by buying me a coffee, so I can dedicate more time to open-source projects like this :)
You can find the official Ollama documentation here.
First, install Ollama PHP via the Composer package manager:
composer require ardagnsrn/ollama-php
Then, you can create a new Ollama client instance:
// with default base URL
$client = \ArdaGnsrn\Ollama\Ollama::client();
// or with custom base URL
$client = \ArdaGnsrn\Ollama\Ollama::client('http://localhost:11434');
Generate a response for a given prompt with a provided model.
$completions = $client->completions()->create([
    'model' => 'llama3.1',
    'prompt' => 'Once upon a time',
]);
$completions->response; // '...in a land far, far away...'
$completions->toArray(); // ['model' => 'llama3.1', 'response' => '...in a land far, far away...', ...]
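Ollama's generate endpoint also accepts optional fields such as system and options; assuming the client forwards extra request fields to the API unchanged, a call with a custom temperature might look like this:
$completions = $client->completions()->create([
    'model' => 'llama3.1',
    'prompt' => 'Once upon a time',
    'system' => 'You are a storyteller.', // optional system prompt (Ollama generate API field)
    'options' => ['temperature' => 0.7],  // model options (Ollama generate API field)
]);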
Generate a response for a given prompt with a provided model and stream the response.
$completions = $client->completions()->createStreamed([
    'model' => 'llama3.1',
    'prompt' => 'Once upon a time',
]);

foreach ($completions as $completion) {
    echo $completion->response;
}
// 1. Iteration: '...in'
// 2. Iteration: ' a'
// 3. Iteration: ' land'
// 4. Iteration: ' far,'
// ...
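If you also need the complete text after streaming, you can concatenate the chunks as they arrive; a minimal sketch using the same iterator:
$fullResponse = '';
$completions = $client->completions()->createStreamed([
    'model' => 'llama3.1',
    'prompt' => 'Once upon a time',
]);

foreach ($completions as $completion) {
    echo $completion->response;             // print each chunk as it arrives
    $fullResponse .= $completion->response; // and keep the full text
}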
Generate the next message in a chat with a provided model.
$response = $client->chat()->create([
    'model' => 'llama3.1',
    'messages' => [
        ['role' => 'system', 'content' => 'You are a llama.'],
        ['role' => 'user', 'content' => 'Hello!'],
        ['role' => 'assistant', 'content' => 'Hi! How can I help you today?'],
        ['role' => 'user', 'content' => 'I need help with my taxes.'],
    ],
]);
$response->message->content; // 'Ah, taxes... *chew chew* Hmm, not really sure how to help with that.'
$response->toArray(); // ['model' => 'llama3.1', 'message' => ['role' => 'assistant', 'content' => 'Ah, taxes...'], ...]
Also, you can use the tools parameter to provide custom functions to the chat. Note that the tools parameter cannot be used with the createStreamed method.
$response = $client->chat()->create([
    'model' => 'llama3.1',
    'messages' => [
        ['role' => 'user', 'content' => 'What is the weather today in Paris?'],
    ],
    'tools' => [
        [
            'type' => 'function',
            'function' => [
                'name' => 'get_current_weather',
                'description' => 'Get the current weather',
                'parameters' => [
                    'type' => 'object',
                    'properties' => [
                        'location' => [
                            'type' => 'string',
                            'description' => 'The location to get the weather for, e.g. San Francisco, CA',
                        ],
                        'format' => [
                            'type' => 'string',
                            'description' => 'The temperature unit to use, e.g. celsius or fahrenheit',
                            'enum' => ['celsius', 'fahrenheit'],
                        ],
                    ],
                    'required' => ['location', 'format'],
                ],
            ],
        ],
    ],
]);
$toolCall = $response->message->toolCalls[0];
$toolCall->function->name; // 'get_current_weather'
$toolCall->function->arguments; // ['location' => 'Paris', 'format' => 'celsius']
$response->toArray(); // ['model' => 'llama3.1', 'message' => ['role' => 'assistant', 'toolCalls' => [...]], ...]
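To act on a tool call, you typically run the matching function yourself and send the result back as a 'tool' role message, as described in the Ollama chat API; a sketch assuming a hypothetical getCurrentWeather() helper and that the client forwards tool messages unchanged:
function getCurrentWeather(string $location, string $format): string
{
    // Hypothetical stub for illustration; replace with a real weather lookup.
    return "18 degrees {$format} in {$location}";
}

$toolCall = $response->message->toolCalls[0];
$result = getCurrentWeather(
    $toolCall->function->arguments['location'],
    $toolCall->function->arguments['format']
);

// Send the tool result back so the model can produce a final answer.
$followUp = $client->chat()->create([
    'model' => 'llama3.1',
    'messages' => [
        ['role' => 'user', 'content' => 'What is the weather today in Paris?'],
        // In a real flow, also replay the assistant message that requested the tool call.
        ['role' => 'tool', 'content' => $result], // tool result message (Ollama chat API)
    ],
]);
$followUp->message->content; // e.g. 'It is currently 18 degrees celsius in Paris.'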
Generate the next message in a chat with a provided model and stream the response.
$responses = $client->chat()->createStreamed([
    'model' => 'llama3.1',
    'messages' => [
        ['role' => 'system', 'content' => 'You are a llama.'],
        ['role' => 'user', 'content' => 'Hello!'],
        ['role' => 'assistant', 'content' => 'Hi! How can I help you today?'],
        ['role' => 'user', 'content' => 'I need help with my taxes.'],
    ],
]);

foreach ($responses as $response) {
    echo $response->message->content;
}
// 1. Iteration: 'Ah,'
// 2. Iteration: ' taxes'
// 3. Iteration: '... '
// 4. Iteration: ' *chew'
// ...
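To keep a multi-turn conversation going while streaming, collect the chunks into the final assistant message and append it to your message history; a minimal sketch:
$messages = [
    ['role' => 'user', 'content' => 'I need help with my taxes.'],
];

$content = '';
$responses = $client->chat()->createStreamed([
    'model' => 'llama3.1',
    'messages' => $messages,
]);

foreach ($responses as $response) {
    $content .= $response->message->content; // accumulate the streamed reply
}

// Append the completed reply so the next request has the full context.
$messages[] = ['role' => 'assistant', 'content' => $content];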
List all available models.
$response = $client->models()->list();
$response->toArray(); // ['models' => [['name' => 'llama3.1', ...], ['name' => 'llama3.1:80b', ...], ...]]
Show details of a specific model.
$response = $client->models()->show('llama3.1');
$response->toArray(); // ['modelfile' => '...', 'parameters' => '...', 'template' => '...']
Create a new model.
$response = $client->models()->create([
    'name' => 'mario',
    'modelfile' => "FROM llama3.1\nSYSTEM You are mario from Super Mario Bros."
]);
$response->status; // 'success'
Create a new model and stream the response.
$responses = $client->models()->createStreamed([
    'name' => 'mario',
    'modelfile' => "FROM llama3.1\nSYSTEM You are mario from Super Mario Bros."
]);

foreach ($responses as $response) {
    echo $response->status;
}
Copy an existing model.
$client->models()->copy('llama3.1', 'llama3.2'); // bool
Delete a model.
$client->models()->delete('mario'); // bool
Pull a model from the Ollama library.
$response = $client->models()->pull('llama3.1');
$response->toArray(); // ['status' => 'downloading digestname', 'digest' => 'digestname', 'total' => 2142590208, 'completed' => 241970]
Pull a model from the Ollama library and stream the response.
$responses = $client->models()->pullStreamed('llama3.1');
foreach ($responses as $response) {
    echo $response->status;
}
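Since pull responses carry total and completed byte counts (see the array above), you can turn the stream into a simple progress indicator; a sketch assuming those fields are present on each streamed chunk:
foreach ($client->models()->pullStreamed('llama3.1') as $response) {
    if (isset($response->total, $response->completed) && $response->total > 0) {
        // Render download progress from the byte counters.
        printf("%s: %.1f%%\n", $response->status, 100 * $response->completed / $response->total);
    } else {
        echo $response->status, PHP_EOL; // status-only chunks (e.g. 'success')
    }
}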
Push a model to the Ollama library.
$response = $client->models()->push('llama3.1');
$response->toArray(); // ['status' => 'uploading digestname', 'digest' => 'digestname', 'total' => 2142590208]
Push a model to the Ollama library and stream the response.
$responses = $client->models()->pushStreamed('llama3.1');
foreach ($responses as $response) {
    echo $response->status;
}
List all running models.
$response = $client->models()->runningList();
$response->toArray(); // ['models' => [['name' => 'llama3.1', ...], ['name' => 'llama3.1:80b', ...], ...]]
Check if a blob exists.
$client->blobs()->exists('blobname'); // bool
Create a new blob.
$client->blobs()->create('blobname'); // bool
Generate embeddings for a given input with a provided model.
$response = $client->embed()->create([
    'model' => 'llama3.1',
    'input' => [
        "Why is the sky blue?",
    ],
]);
$response->toArray(); // ['model' => 'llama3.1', 'embeddings' => [[0.1, 0.2, ...]], ...]
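Embeddings are plain float vectors, so you can compare two inputs with cosine similarity; a minimal sketch assuming the response array holds one vector per input under the 'embeddings' key shown above:
$response = $client->embed()->create([
    'model' => 'llama3.1',
    'input' => [
        'Why is the sky blue?',
        'What makes the sky look blue?',
    ],
]);

[$a, $b] = $response->toArray()['embeddings']; // one vector per input (assumed shape)

$dot = 0.0;
$normA = 0.0;
$normB = 0.0;
foreach ($a as $i => $value) {
    $dot   += $value * $b[$i];
    $normA += $value * $value;
    $normB += $b[$i] * $b[$i];
}
$similarity = $dot / (sqrt($normA) * sqrt($normB)); // close to 1.0 for similar meanings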
Run the tests with:
composer test
Please see CHANGELOG for more information on what has changed recently.
Please see CONTRIBUTING for details.
The MIT License (MIT). Please see License File for more information.