hinterdupfinger/obsidian-ollama

Feature request: incorporate index/vector database support from Obsidian into ollama

Opened this issue · 10 comments

I came across this blog post from the ollama team and it seemed very interesting. It could be quite helpful for feeding additional context from Obsidian into the model, improving the output the plugin gets back from ollama.

https://ollama.ai/blog/llms-in-obsidian

100%. I just read the same article while looking into this. This would be a game changer.

LlamaIndex needs an OpenAI key, so I will try to cook up a POC with LangChain instead (I also asked on the LlamaIndex repo whether they can add support for local ollama models in the meantime). (ref.: https://github.com/jmorganca/ollama/blob/main/docs/tutorials/langchainjs.md)
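For anyone curious, here is a rough sketch of what such a LangChain.js POC could look like: index note text with local ollama embeddings, retrieve the closest chunks, and pass them to the model as context. The package paths, model name, chunk sizes, and helper name are assumptions/placeholders, not the actual plugin code.

```ts
// Hypothetical POC: local RAG over Obsidian notes using LangChain.js + ollama.
import { Ollama } from "@langchain/community/llms/ollama";
import { OllamaEmbeddings } from "@langchain/community/embeddings/ollama";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";

async function askVault(noteContents: string[], question: string): Promise<string> {
  // Split notes into overlapping chunks so retrieval stays focused.
  const splitter = new RecursiveCharacterTextSplitter({ chunkSize: 500, chunkOverlap: 50 });
  const docs = await splitter.createDocuments(noteContents);

  // Embed the chunks locally via ollama's embeddings endpoint (no OpenAI key).
  const embeddings = new OllamaEmbeddings({ baseUrl: "http://localhost:11434", model: "llama2" });
  const store = await MemoryVectorStore.fromDocuments(docs, embeddings);

  // Retrieve the most relevant chunks and hand them to the model as context.
  const hits = await store.similaritySearch(question, 4);
  const context = hits.map((d) => d.pageContent).join("\n---\n");

  const llm = new Ollama({ baseUrl: "http://localhost:11434", model: "llama2" });
  return llm.invoke(`Answer using only this context:\n${context}\n\nQuestion: ${question}`);
}
```

The in-memory store here is just for illustration; persisting the index (as discussed below) would need a real vector store on disk.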

Hey guys, I am close to having a solution, but it is currently slow/unfeasible in JS without an OpenAI token. Do you think it would be worth having it as an extra service running next to ollama for now, or is that a deal-breaker for you?

I would be running ollama as a remote service anyway, so an additional service won't worry me. I'm more concerned with keeping everything local (offline). Looking at the Flask server implementation, you do appear to lose some of ollama's flexibility for tuning models via the Modelfile.

It is fully local. Of course, I plan to add options to save your indexes wherever you want (currently a local storage folder).

Unfortunately I do not understand the comment about the local makefile, can you elaborate (or give advice)?

I don't think it would be an issue having it run alongside Ollama. I agree with the others re: it being private/running locally being the bigger deal.

Local Modelfile, see example: https://github.com/jmorganca/ollama/tree/main/examples/modelfile-sentiments

It gives you the ability to use a tuned model (e.g. prompts, temperature) quite easily with ollama.
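To make the suggestion concrete, a minimal illustrative Modelfile (the base model, temperature, and system prompt below are just placeholders, not a recommendation):

```
FROM llama2
# Lower temperature for more focused, less creative answers over the vault.
PARAMETER temperature 0.3
SYSTEM """You answer questions using the user's Obsidian notes as context."""
```

You build and run it with `ollama create <name> -f Modelfile` and `ollama run <name>`; the point is that a separate indexing server should not take this tuning path away.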

@Irate-Walrus I understand that and I use it myself, but I do not understand how you propose it would improve the PR? It is on the todo list to make the model switchable, alongside the embedding:

Update the Python API to use better embedding / allow it to use different models / temperature

@brumik I missed that in the TODO, my bad. I do wonder whether it would be worth emulating the ollama API (https://github.com/jmorganca/ollama/blob/main/docs/api.md), even if it just does pass-through with an additional indexing endpoint. It's good work; I'm just not sure how keen @hinterdupfinger would be to integrate something with a dependency outside of ollama itself.
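Purely as a hypothetical sketch of that idea (none of this exists today): a thin service could expose the standard ollama endpoints unchanged and only add an indexing route, so the plugin keeps talking to a single base URL whether or not indexing is available. Route names, the port, and the request shape below are made up.

```ts
// Hypothetical pass-through proxy: /api/index is new, everything else goes to ollama.
import express from "express";

const OLLAMA = "http://localhost:11434";
const app = express();
app.use(express.json());

// Extra endpoint this service would add on top of the ollama API (placeholder logic).
app.post("/api/index", (req, res) => {
  // ...build/update the vault's vector index here...
  res.json({ status: "indexed", files: req.body?.files?.length ?? 0 });
});

// All other /api/* calls are forwarded to ollama as-is (streaming omitted for brevity).
app.all("/api/*", async (req, res) => {
  const upstream = await fetch(`${OLLAMA}${req.originalUrl}`, {
    method: req.method,
    headers: { "content-type": "application/json" },
    body: ["GET", "HEAD"].includes(req.method) ? undefined : JSON.stringify(req.body),
  });
  res.status(upstream.status).send(await upstream.text());
});

app.listen(11435);
```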

Maybe we could then create a new plugin. To be fair, I do not like that it is a new dependency, but I see that you have to run the LLM somewhere anyway. Also, this work is different enough from the current plugin that it belongs in a separate one.

I created an OK user interface; I will look into publishing it, and in the meantime I will try to improve the Python server to offer better customization and some supporting documentation.