Opened this issue 3 months ago · 1 comment
Can we use local Llama models?
Yes. You need to change ~line 55 in main.py to `result = generate_text_completion("ollama/llama3.1")`. Check the LiteLLM docs for more info.
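Here's a minimal sketch of how that routing works through LiteLLM itself, assuming `main.py` wraps LiteLLM's `completion` call — the wrapper body and prompt below are illustrative assumptions, not the repo's actual code:

```python
from litellm import completion

def generate_text_completion(model: str) -> str:
    # Illustrative wrapper (assumption): the repo's real function may differ.
    # LiteLLM routes "ollama/<name>" models to a local Ollama server;
    # api_base points at Ollama's default endpoint.
    response = completion(
        model=model,
        messages=[{"role": "user", "content": "Hello from a local model"}],
        api_base="http://localhost:11434",
    )
    # LiteLLM returns an OpenAI-style response object.
    return response.choices[0].message.content

result = generate_text_completion("ollama/llama3.1")
print(result)
```

Make sure Ollama is running locally and the model has been pulled first (`ollama pull llama3.1`).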