soulsands/trilium-chat

Ollama instructions are really unclear

Opened this issue · 3 comments

I'm having issues integrating my locally hosted llama3 API. I followed the instructions, yet the plugin keeps prompting me to provide an API key for ChatGPT. I'm not sure what else needs to be modified for Ollama to work, but I can see the requests aren't even leaving the chat menu, so it seems an if-check somewhere is blocking them. It looks like either a simple oversight, or the feature is mostly in but not finished.

Any help would be appreciated.

Hey @JonnyDeates, here's what my Chat Options look like:

{
	"viewWidth": 364,
	"engine": "ChatGpt",
	"apiKey": "asdfasdfasdfasdfasdf",
	"requestUrls": {
		"completion": "https://ollama.internal.network/api/chat"
	},
	"engineOptions": {
		"model": "llama3",
		"max_tokens": 2500,
		"temperature": 0.3,
		"top_p": 1,
		"presence_penalty": 0.5,
		"frequency_penalty": 0.5,
		"stream": false,
		"n": 1
	},
	"shortcut": {
		"toggle": "Alt+Q",
		"hide": "Esc"
	},
	"faces": [
		"bx-smile",
		"bx-wink-smile",
		"bx-face",
		"bx-happy-alt",
		"bx-cool",
		"bx-laugh",
		"bx-upside-down"
	],
	"colors": [
		"var(--muted-text-color)"
	],
	"autoSave": true,
	"systemPrompt": "",
	"checkUpdates": true
}
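
With the completion URL overridden like this, the plugin keeps speaking OpenAI's chat-completions wire format and simply posts it to Ollama instead; the apiKey only needs to be a non-empty placeholder, since Ollama ignores it. As a rough sketch of what should show up in the Network tab (Ollama's /api/chat happens to accept model, messages, and stream under the same names as OpenAI; I believe the other engineOptions ride along and are simply ignored):

{
	"model": "llama3",
	"messages": [
		{ "role": "user", "content": "Hello" }
	],
	"stream": false,
	"temperature": 0.3
}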

What does yours look like currently? What does the Network tab in your browser's dev tools show when you send a message? Are you able to share any example requests?
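
If you want to take the plugin out of the equation entirely, something like this pasted into the browser dev-tools console should get a reply straight from Ollama (the URL and model are from my config above; adjust for yours):

fetch('https://ollama.internal.network/api/chat', {
	method: 'POST',
	headers: { 'Content-Type': 'application/json' },
	body: JSON.stringify({
		model: 'llama3',
		messages: [{ role: 'user', content: 'ping' }],
		stream: false,
	}),
})
	.then((res) => res.json())
	.then(console.log)
	.catch(console.error);

If that fails with a CORS or 403 error, the problem is between the browser and Ollama rather than inside the plugin.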

But yes, this is mainly just an (ab)use of the existing code to make it think it's talking to ChatGPT over OpenAI's API, since the existing codebase is very tightly coupled to that. If someone wants to integrate a proper Ollama engine, a PR would certainly be accepted!
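
For reference, the main thing a new engine would need to do is reshape Ollama's native reply into the OpenAI shape the rest of the code expects. Purely as a hypothetical sketch (these names don't exist in the codebase):

// Ollama's non-streaming /api/chat reply looks like
// { message: { role, content }, done, ... }, while the existing code
// expects OpenAI's { choices: [{ message, finish_reason }] } shape.
function ollamaToOpenAI(reply) {
	return {
		choices: [
			{
				message: reply.message,
				finish_reason: reply.done ? 'stop' : null,
			},
		],
	};
}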

I've also updated the README with the above information, and an Nginx configuration block that may be required so that Ollama accepts the "Authorization" header.
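
The kind of block I mean looks roughly like this; hostnames, ports, and the exact header handling here are placeholders for your own setup, so check the README for the real thing:

# Reverse proxy in front of Ollama. The browser's CORS preflight asks
# permission to send the "Authorization" header the plugin always adds,
# so answer that here and pass everything else through.
location /api/ {
	if ($request_method = OPTIONS) {
		add_header Access-Control-Allow-Origin "*";
		add_header Access-Control-Allow-Methods "POST, OPTIONS";
		add_header Access-Control-Allow-Headers "Authorization, Content-Type";
		return 204;
	}
	add_header Access-Control-Allow-Origin "*" always;
	# If Ollama itself rejects the dummy Authorization header, strip it:
	# proxy_set_header Authorization "";
	proxy_pass http://127.0.0.1:11434;
}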

Created #20 to talk about integrating Ollama better.