jybaek/llm-with-slack

Document how to configure the project to use a self-hosted Ollama LLM

GregHilston opened this issue · 0 comments

For example, say I'm running Ollama on the same machine as this container, or even on another machine. How can I configure this project to use that LLM and skip Claude and ChatGPT entirely?
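
One possible approach (not confirmed against this repo's code, so treat it as a sketch): Ollama exposes an OpenAI-compatible endpoint at `http://localhost:11434/v1`, so if the project already talks to ChatGPT through the OpenAI Python client, that client could be repointed at the local server. The model name `llama3` below is an assumption and should match whatever you've pulled locally:

```python
from openai import OpenAI

# Ollama serves an OpenAI-compatible API at /v1 on port 11434 by default.
# If Ollama runs on a different machine, replace localhost with its address.
client = OpenAI(
    base_url="http://localhost:11434/v1",
    api_key="ollama",  # the client requires a key, but Ollama ignores its value
)

response = client.chat.completions.create(
    model="llama3",  # assumes this model was pulled via `ollama pull llama3`
    messages=[{"role": "user", "content": "Hello from Slack!"}],
)
print(response.choices[0].message.content)
```

Note that if this project runs inside a Docker container, `localhost` refers to the container itself; on Docker Desktop, `host.docker.internal` typically resolves to the host machine where Ollama is listening.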