miurla/morphic

Support Ollama AI Provider

miurla opened this issue · 6 comments

Update (9/30): #215 (comment)

Overview

Ollama support is currently an unstable, experimental feature, implemented using the Ollama AI Provider. The provider explicitly states that Object generation and Tool usage are unstable, and that Tool streaming is not supported. Because Morphic depends on these capabilities, it is very unstable when running on Ollama. Please use it with these limitations in mind.
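
For context, here is a minimal sketch (not Morphic's actual wiring) of how a model is created with the ollama-ai-provider package and used through the AI SDK. The base URL and model name are assumptions; see the environment variables below.

```ts
import { createOllama } from 'ollama-ai-provider';
import { generateText } from 'ai';

// baseURL value is illustrative; the provider defaults to a local Ollama server.
const ollama = createOllama({ baseURL: 'http://localhost:11434/api' });

const { text } = await generateText({
  model: ollama('mistral'), // model name is an assumption
  prompt: 'Summarize the latest AI news in one sentence.',
});
console.log(text);
```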

Environment Variables

  • OLLAMA_MODEL=[YOUR_OLLAMA_MODEL]
    • The main model to use. Recommended: mistral or openhermes
    • Object generation, Tool usage
  • OLLAMA_SUB_MODEL=[YOUR_OLLAMA_SUB_MODEL]
    • The sub model to use. Recommended: phi3 or llama3
    • Text generation
  • OLLAMA_BASE_URL=[YOUR_OLLAMA_URL]
    • The base URL to use. e.g. http://localhost:11434
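
Put together, a sample configuration might look like this (the model choices are just the recommendations above):

```
# Example .env.local — values are illustrative
OLLAMA_MODEL=mistral
OLLAMA_SUB_MODEL=phi3
OLLAMA_BASE_URL=http://localhost:11434
```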

PR

@miurla Ollama now supports tool calls - https://ollama.com/blog/tool-support

Ollama AI Provider needs to support tool calls: sgomez/ollama-ai-provider#11

Looks like we may get that soon... fingers crossed 🤞

It seems that v0.11 has been released and it includes tool support. I haven't tested it yet, but I plan to try it soon.

https://github.com/sgomez/ollama-ai-provider/releases/tag/ollama-ai-provider%400.11.0

Ollama's tools do not support streaming yet.

Ollama's tool calling does not work in streams yet, but this provider can detect tool responses within a stream.
https://github.com/sgomez/ollama-ai-provider?tab=readme-ov-file#tool-streaming
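
As a rough sketch of what that detection looks like from the AI SDK side (the weather tool here is a hypothetical example, and the exact stream part shapes depend on the ai package version):

```ts
import { createOllama } from 'ollama-ai-provider';
import { streamText, tool } from 'ai';
import { z } from 'zod';

const ollama = createOllama();

const result = await streamText({
  model: ollama('llama3.1'),
  tools: {
    // Hypothetical tool for illustration only.
    weather: tool({
      description: 'Get the weather for a location',
      parameters: z.object({ location: z.string() }),
    }),
  },
  prompt: 'What is the weather in Tokyo?',
});

for await (const part of result.fullStream) {
  if (part.type === 'tool-call') {
    // The provider surfaced a tool call instead of streamed text.
    console.log('tool call:', part.toolName, part.args);
  } else if (part.type === 'text-delta') {
    process.stdout.write(part.textDelta);
  }
}
```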

I tested it with the llama3.1 model, but the Researcher did not work. We need to either drop streaming support or manually detect tool calls and respond.

Test code: #294


Update

Object generation and Tool usage with Ollama are now working stably, so we have implemented support for them. Ollama still does not support tool streaming, so there is a wait until the full answer has been generated.
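
In practice that means the whole response arrives at once. A minimal sketch of non-streaming tool usage under those constraints (the search tool is hypothetical, not Morphic's actual Researcher):

```ts
import { createOllama } from 'ollama-ai-provider';
import { generateText, tool } from 'ai';
import { z } from 'zod';

const ollama = createOllama();

// Without tool streaming, nothing is shown to the user until
// this call completes, including any intermediate tool calls.
const result = await generateText({
  model: ollama('qwen2.5'),
  tools: {
    search: tool({
      description: 'Search the web',
      parameters: z.object({ query: z.string() }),
      execute: async ({ query }) => ({ results: [`stub result for ${query}`] }),
    }),
  },
  prompt: 'Who won the 2022 World Cup?',
});

// Tool calls and results are only available on the completed result.
console.log(result.toolCalls);
console.log(result.text);
```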

Model

  • qwen2.5

Currently, qwen2.5 is the only supported model.

PR

#352