Configure with a local LLM?
uninstallit opened this issue · 1 comment
uninstallit commented
Is it possible to configure this with a local LLM, e.g. an Ollama or LlamaCpp server?
Some code cannot be sent to public APIs.
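For reference, both Ollama and the llama.cpp server expose OpenAI-compatible endpoints, so in principle a client only needs a configurable base URL. A minimal sketch of the pattern I have in mind, using the `openai` Python SDK against a local Ollama instance (the port, model name, and dummy API key are assumptions about a default local setup, not this project's config):

```python
from openai import OpenAI

# Minimal sketch: point an OpenAI-compatible client at a local server.
# Assumptions: Ollama is running on its default port (11434) and the
# model "llama3" has been pulled locally (`ollama pull llama3`).
client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
    api_key="ollama",  # local servers typically ignore the key, but the SDK requires one
)

response = client.chat.completions.create(
    model="llama3",
    messages=[{"role": "user", "content": "Say hello from a local model."}],
)
print(response.choices[0].message.content)
```

The same pattern should work with a llama.cpp server by swapping in its base URL (by default `http://localhost:8080/v1`).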
Thanks
LeonOstrez commented
Hey @uninstallit, sorry, we are not maintaining this project anymore. Our full focus is on GPT Pilot at the moment.