bsilverthorn/maccarone

Any plans for adding llama.cpp support?

Opened this issue · 1 comment

It would be better to be able to run Code Llama locally.

I agree that it would be interesting to try other models, especially local models. Happy to accept patches. I won't have time to implement this myself in the near future, though.