wingedrasengan927/medium-ai

Local LLM

Opened this issue · 3 comments

Is there a way to run this with a local LLM rather than the OpenAI API? Could I power this with a self-hosted LLaMA 30B model?

Not currently, but I'm working on it.

Well, once you have that feature in, or the ability to connect to something like oobabooga, I'll be pretty excited. :) There are a few other features that I think might work really well, but I'll wait to see how this goes first.
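For anyone looking for a stopgap until native support lands: a common pattern (a sketch only, not something medium-ai supports today; the localhost URL, port, and model name below are assumptions) is to run a local server that exposes an OpenAI-compatible API, such as oobabooga's text-generation-webui or llama.cpp's server, and point OpenAI-style request code at it:

```python
import json
import urllib.request

# Hypothetical local endpoint; llama.cpp's server and oobabooga's
# text-generation-webui can both expose an OpenAI-compatible API.
BASE_URL = "http://localhost:5000/v1"


def build_completion_request(prompt: str, model: str = "local-llama-30b") -> dict:
    """Build an OpenAI-style chat-completions payload for a local server."""
    return {
        "model": model,  # name is whatever model the local server has loaded
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 128,
    }


def complete(prompt: str) -> str:
    """POST the payload to the local server (requires a server running)."""
    payload = json.dumps(build_completion_request(prompt)).encode()
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]


if __name__ == "__main__":
    print(build_completion_request("Hello"))
```

Because the payload follows the OpenAI chat-completions shape, any code written against the OpenAI API only needs its base URL swapped to target the local server.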

Sure, thanks.