PullModelAndEnsureSuccessAsync gives a '500 Internal Server Error' from the Ollama API
lrolvink commented
Describe the bug
Hi,
I started using LangChain but ran into a problem when combining my own models with Ollama. As long as you use known models, the code works, but as soon as you use a self-made model, 'llm.GenerateAsync("Hi!")' returns a 500 Internal Server Error.
Steps to reproduce the bug
- Have a running Ollama server.
- Create a custom model locally (e.g. ollama create mycustommodel -f Modelfile).
- Execute the following snippet:
// Assumes a provider pointing at the running Ollama server, e.g.:
var provider = new OllamaProvider();
var embeddingModel = new OllamaEmbeddingModel(provider, id: "all-minilm");
var llm = new OllamaChatModel(provider, id: "mycustommodel");
Console.WriteLine($"LLM answer: {await llm.GenerateAsync("Hi!").ConfigureAwait(false)}");
Expected behavior
Pulling the model should be optional: a self-made model exists only on the local server, so it cannot be pulled from the Ollama registry. Commenting out the following line in LangChain.Providers.Ollama's GenerateAsync() solves my issue:
//await Provider.Api.Models.PullModelAndEnsureSuccessAsync(Id, cancellationToken: cancellationToken).ConfigureAwait(false);
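Rather than removing the pull entirely, one option would be to skip it when the model is already present locally. Here is a minimal sketch using Ollama's GET /api/tags endpoint, assuming the default server address; the helper name is hypothetical and not part of LangChain:

using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

// Hypothetical helper: returns true if the model already exists on the local
// Ollama server, so the caller can skip PullModelAndEnsureSuccessAsync.
static async Task<bool> ModelExistsLocallyAsync(HttpClient http, string model)
{
    // GET /api/tags lists every model available on the local server.
    using var response = await http.GetAsync("http://localhost:11434/api/tags");
    response.EnsureSuccessStatusCode();
    using var doc = JsonDocument.Parse(await response.Content.ReadAsStringAsync());
    foreach (var m in doc.RootElement.GetProperty("models").EnumerateArray())
    {
        // Names are reported as "model:tag", e.g. "mycustommodel:latest".
        var name = m.GetProperty("name").GetString();
        if (name == model || name?.StartsWith(model + ":") == true)
            return true;
    }
    return false;
}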
Screenshots
No response
NuGet package version
No response
Additional context
HTTP trace of the failing pull:
POST /api/pull HTTP/1.1
Host: 172.28.219.196:11434
Content-Type: application/json; charset=utf-8
{"model":"mycustommodel","insecure":false,"stream":false}
HTTP/1.1 500 Internal Server Error
Content-Type: application/json
Date: Tue, 18 Jun 2024 17:45:35 GMT
Content-Length: 52
{"error":"pull model manifest: file does not exist"}