wagtail/wagtail-ai

Ollama backend

krzysztofjeziorny opened this issue · 4 comments

In the last Wagtail webinar @tomdyson mentioned that this project can use Ollama with the Llava model as a backend. Is that already possible, or is it planned for a future release? I've been looking through the docs but didn't find any examples.

Thanks for this interesting project!

Let me try to answer this myself: Ollama integration is possible via llm and llm-ollama, as described under "Using other models". I ran into errors, but it's a start :)
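For anyone following along, a minimal setup sketch. This assumes Ollama is already installed and its daemon is running locally; the package names are the ones published on PyPI, and the model names match the config below:

```shell
# Install the llm library/CLI and its Ollama plugin.
pip install llm llm-ollama

# Pull the models referenced in the wagtail-ai config; requires a
# running Ollama daemon.
ollama pull llama3
ollama pull llava-llama3

# List the models llm can see, to confirm the Ollama models registered.
llm models
```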

My test setup is as follows:

WAGTAIL_AI = {
    "BACKENDS": {
        "default": {
            "CLASS": "wagtail_ai.ai.llm.LLMBackend",
            "CONFIG": {
                "MODEL_ID": "llama3:latest",
            },
        },
        "llava": {
            "CLASS": "wagtail_ai.ai.llm.LLMBackend",
            "CONFIG": {
                "MODEL_ID": "llava-llama3:latest",
                "TOKEN_LIMIT": 300,
            },
        },
    },
    "IMAGE_DESCRIPTION_BACKEND": "llava",
}
WAGTAILIMAGES_IMAGE_FORM_BASE = "wagtail_ai.forms.DescribeImageForm"

The image description couldn't be generated (NotImplementedError: This backend does not support image description).

The rich text editor produced a longer error:

"The editor just crashed. Content has been reset to the last saved version."

Error: Minified React error #200; visit https://reactjs.org/docs/error-decoder.html?invariant=200 for the full message or use the non-minified dev environment for full errors and additional helpful warnings.

Ku@http://0.0.0.0:8088/static/wagtailadmin/js/vendor.js?v=0bdb60e2:2:176710
394/t.default@http://0.0.0.0:8088/static/wagtail_ai/draftail.js?v=0bdb60e2:1:7505
Yo@http://0.0.0.0:8088/static/wagtailadmin/js/vendor.js?v=0bdb60e2:2:121584
Aa@http://0.0.0.0:8088/static/wagtailadmin/js/vendor.js?v=0bdb60e2:2:130433
wl@http://0.0.0.0:8088/static/wagtailadmin/js/vendor.js?v=0bdb60e2:2:168708
pu@http://0.0.0.0:8088/static/wagtailadmin/js/vendor.js?v=0bdb60e2:2:160267
fu@http://0.0.0.0:8088/static/wagtailadmin/js/vendor.js?v=0bdb60e2:2:160190
ru@http://0.0.0.0:8088/static/wagtailadmin/js/vendor.js?v=0bdb60e2:2:157220
4448/Ki/<@http://0.0.0.0:8088/static/wagtailadmin/js/vendor.js?v=0bdb60e2:2:108993
53/t.unstable_runWithPriority@http://0.0.0.0:8088/static/wagtailadmin/js/vendor.js?v=0bdb60e2:2:205430
Bi@http://0.0.0.0:8088/static/wagtailadmin/js/vendor.js?v=0bdb60e2:2:108702
Ki@http://0.0.0.0:8088/static/wagtailadmin/js/vendor.js?v=0bdb60e2:2:108940
qi@http://0.0.0.0:8088/static/wagtailadmin/js/vendor.js?v=0bdb60e2:2:108873
M@http://0.0.0.0:8088/static/wagtailadmin/js/vendor.js?v=0bdb60e2:2:177628
Kt@http://0.0.0.0:8088/static/wagtailadmin/js/vendor.js?v=0bdb60e2:2:86200


    in Unknown
    in div
    in bt
    in div
    in Unknown
    in ForwardRef
    in Rt
    in Pt
    in zt
    in topToolbar
    in div
    in Lt
    in Pe
    in Unknown
    in Unknown
    in Se

Django 5.0.6
Wagtail 5.2.5
wagtail-ai 2.1.0
llm 0.14
llm-ollama 0.3.0

At the moment the image description functionality is limited:

"This feature is experimental and is currently only supported when using OpenAI as a provider"

as per the release notes: https://github.com/wagtail/wagtail-ai/releases/tag/v2.1.0
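For reference, an OpenAI-backed image description setup might look roughly like the sketch below. The backend name "vision" and the MODEL_ID are assumptions on my part, not values confirmed by the docs, so check the wagtail-ai documentation for the exact settings your version supports:

```python
# Hypothetical sketch: OpenAI-backed image description for wagtail-ai.
# The backend name "vision" and both MODEL_ID values are assumptions;
# consult the wagtail-ai documentation for the values your version supports.
WAGTAIL_AI = {
    "BACKENDS": {
        "default": {
            "CLASS": "wagtail_ai.ai.llm.LLMBackend",
            "CONFIG": {
                "MODEL_ID": "gpt-3.5-turbo",
            },
        },
        "vision": {
            "CLASS": "wagtail_ai.ai.llm.LLMBackend",
            "CONFIG": {
                "MODEL_ID": "gpt-4-vision-preview",  # assumed vision-capable model ID
                "TOKEN_LIMIT": 300,
            },
        },
    },
    "IMAGE_DESCRIPTION_BACKEND": "vision",
}
WAGTAILIMAGES_IMAGE_FORM_BASE = "wagtail_ai.forms.DescribeImageForm"
```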

Can you check if the rich text editor integration works without IMAGE_DESCRIPTION_BACKEND and WAGTAILIMAGES_IMAGE_FORM_BASE?

Setting just the default backend (with llama3:latest) works. Kind of: the completion prompt works, but the correction prompt also produces a completion.

It also seems the prompt takes the content of the whole text block (e.g. a paragraph) as input, not just the selected part?

It is crucial to set "TOKEN_LIMIT"; otherwise the editor crashes.
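To illustrate, the default backend from my setup above would then carry an explicit TOKEN_LIMIT as well. The value 4096 is my guess at a sensible context size for llama3, not a documented requirement, so adjust it for your model:

```python
# Sketch: set TOKEN_LIMIT on every backend to avoid the editor crash.
# The value 4096 is an assumed context size for llama3; adjust as needed.
WAGTAIL_AI = {
    "BACKENDS": {
        "default": {
            "CLASS": "wagtail_ai.ai.llm.LLMBackend",
            "CONFIG": {
                "MODEL_ID": "llama3:latest",
                "TOKEN_LIMIT": 4096,
            },
        },
    },
}
```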