brianpetro/obsidian-smart-connections

Context is included late when using Ollama


I'm using Ollama with the Custom Local (OpenAI format) config.

The context seems to be sent after the question is asked. I can ask a question and the plugin will pull from context, but the context only appears to be included after the first message, so I have to paste the same message a second time after the first reply to actually make use of it. Essentially, it doesn't seem to wait until the context has been collected before sending the message.
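
For reference, here's a rough sketch (in TypeScript, since that's what Obsidian plugins use) of the request shape I'd expect for a single turn, assuming Ollama's OpenAI-compatible endpoint at localhost:11434. The function name and prompt wording are just placeholders; the point is that the retrieved context and the question should go out in the same payload, rather than the context trailing a turn behind:

```ts
// Hypothetical sketch: context and question sent together in one request.
async function askWithContext(question: string, noteContext: string): Promise<string> {
  const res = await fetch("http://localhost:11434/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3.2",
      messages: [
        // Context gathered from the vault goes first...
        { role: "system", content: `Use the following notes to answer:\n${noteContext}` },
        // ...and the question follows in the SAME payload.
        { role: "user", content: question },
      ],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```

What I'm seeing instead behaves as if the first request goes out with the question alone, and the context only shows up in the following request.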

I use llama3.2, but the behavior seems to be the same with all models I've tried.