Local LLM
Opened this issue · 3 comments
xvishon commented
Is there a way to run this using a local LLM rather than the OpenAI API? Could I power this with a self-hosted LLaMA 30B model?
wingedrasengan927 commented
Not currently, but I'm working on it.
xvishon commented
Well, once you have that feature, or the ability to connect to something like oobabooga, I'll be pretty excited. :) There are a few other features that I think might work really well, but I'll wait to see how this goes first.
wingedrasengan927 commented
Sure, thanks.
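For anyone landing here before the feature ships: many local servers (oobabooga's text-generation-webui with its OpenAI-compatible extension, llama.cpp's server, etc.) expose an OpenAI-style `/v1/chat/completions` endpoint, so a project built against the OpenAI API can often be redirected by swapping the base URL. A minimal sketch, assuming a local server at `http://localhost:5000` and a model named `llama-30b` (both placeholders; adjust to your setup):

```python
# Sketch of calling a local OpenAI-compatible server instead of api.openai.com.
# The base URL and model name below are assumptions for illustration.
import json
import urllib.request


def build_chat_request(base_url, model, messages):
    """Build the URL and JSON body for an OpenAI-style chat completion call."""
    url = base_url.rstrip("/") + "/v1/chat/completions"
    body = json.dumps({"model": model, "messages": messages}).encode("utf-8")
    return url, body


def chat(base_url, model, messages, api_key="not-needed-locally"):
    """POST the request and return the first choice's message content."""
    url, body = build_chat_request(base_url, model, messages)
    req = urllib.request.Request(
        url,
        data=body,
        headers={
            "Content-Type": "application/json",
            # Local servers usually ignore the key, but the header is expected.
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]


if __name__ == "__main__":
    url, body = build_chat_request(
        "http://localhost:5000",  # assumed local server address
        "llama-30b",              # assumed model name
        [{"role": "user", "content": "Hello"}],
    )
    print(url)
```

If the project reads its API endpoint from a config value or environment variable, pointing that at the local server's base URL is often all that's needed.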