awaescher/OllamaSharp

How to connect with the knowledge base

anan1213095357 opened this issue · 4 comments

How to connect with the knowledge base

Maybe you should try LangChain or Semantic Kernel, or learn how Retrieval Augmented Generation (RAG) works and implement the search yourself using OllamaApiClient.GenerateEmbeddings and some vector database.
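To make the RAG idea concrete, here is a minimal, self-contained sketch in Python. Everything in it is made up for illustration: the `embed` function is a toy bag-of-words stand-in for a real embedding call (such as `OllamaApiClient.GenerateEmbeddings`), and the "index" is just an in-memory list rather than a vector database.

```python
import math

# Hypothetical stand-in for a real embedding model. A tiny fixed
# vocabulary is counted so the sketch runs without any model; a real
# setup would call an embeddings API here instead.
VOCAB = ["ollama", "model", "vector", "database", "embedding", "search", "cat", "pet"]

def embed(text: str) -> list[float]:
    words = text.lower().replace(".", "").replace("?", "").split()
    # startswith catches simple plurals like "embeddings" or "cats"
    return [float(sum(1 for w in words if w.startswith(term))) for term in VOCAB]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# "Indexing": store each knowledge-base chunk TOGETHER with its
# embedding -- the original text never leaves the store.
documents = [
    "Ollama runs large language models locally.",
    "A vector database stores embeddings for similarity search.",
    "Cats are popular pets.",
]
index = [(doc, embed(doc)) for doc in documents]

# "Retrieval": embed the question, rank chunks by similarity, and
# paste the best chunks into the prompt sent to the model.
def retrieve(question: str, top_k: int = 2) -> list[str]:
    q = embed(question)
    ranked = sorted(index, key=lambda pair: cosine(q, pair[1]), reverse=True)
    return [doc for doc, _ in ranked[:top_k]]

context = retrieve("How do I search embeddings in a vector database?")
prompt = "Answer using this context:\n" + "\n".join(context)
```

In a real application the embedding call and the similarity search would be handled by the model and the vector database respectively; only the overall flow (embed, store, retrieve, prompt) is the point here.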

I understand that the retrieval vectors can be saved, but how can the vectors obtained through GenerateEmbeddings be turned back into text?

It doesn't work that way: the vectors produced by embedding are only used to compare similarity distances; they can't be decoded back into text. You need to maintain the association between the vectors and the original text yourself. Vector databases usually let you attach the text or an id to each vector.
https://platform.openai.com/docs/tutorials/web-qa-embeddings
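A minimal sketch of that vector-to-text association, assuming a vector store that returns ids. The ids, texts, and hand-made vectors below are all invented for illustration:

```python
# The embedding itself cannot be decoded back into text. Instead, keep
# a mapping from an id to the original text, store (id, vector) pairs
# in the vector store, and resolve the winning id back to its text
# after the similarity search.

texts = {
    "doc-1": "OllamaSharp is a .NET client for the Ollama API.",
    "doc-2": "Embeddings are numeric vectors used for similarity search.",
}

# Toy "vector store": id -> vector (2-D vectors made up by hand;
# real embeddings have hundreds or thousands of dimensions).
vector_store = {
    "doc-1": [1.0, 0.0],
    "doc-2": [0.0, 1.0],
}

def nearest_id(query_vec: list[float]) -> str:
    # Dot product as a stand-in similarity measure.
    return max(
        vector_store,
        key=lambda i: sum(q * v for q, v in zip(query_vec, vector_store[i])),
    )

hit = nearest_id([0.1, 0.9])  # query vector closest to doc-2
original_text = texts[hit]    # the id resolves back to the stored text
```

This is the pattern vector databases implement for you: the search operates on vectors, but each hit carries an id or payload that points back to the source text.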

This is not specific to the Ollama API, so it's out of scope for this project. @mili-tan is right, you should use LangChain or Semantic Kernel for this.
There are tons of articles out there; I think this one is a good one to start: