Any plans to add support for llama.cpp?
Opened this issue · 1 comment
ivandir commented
It would be great to be able to run Code Llama locally.
bsilverthorn commented
I agree that it would be interesting to try other models, especially local models. Happy to accept patches. I won't have time to implement this myself in the near future, though.