yoheinakajima/prettygraph

use local Llama models instead of the OpenAI API?

Opened this issue · 1 comment

Can we use local Llama models?

Yes. You need to change ~line 55 in main.py to `result = generate_text_completion("ollama/llama3.1")`.
Check the LiteLLM docs for more info.
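
For reference, here's a minimal sketch of what that call looks like through LiteLLM, assuming Ollama is running locally on its default port with `llama3.1` pulled. The prompt text here is illustrative, not prettygraph's actual prompt:

```python
# Minimal sketch: routing a completion to a local Llama model via LiteLLM.
# Assumes `pip install litellm` and a running Ollama server (`ollama pull llama3.1`).
from litellm import completion

response = completion(
    model="ollama/llama3.1",            # "ollama/..." prefix routes the call to Ollama
    messages=[{"role": "user", "content": "Extract a knowledge graph from: Alice knows Bob."}],
    api_base="http://localhost:11434",  # default Ollama endpoint; adjust if yours differs
)

# LiteLLM normalizes the response to the OpenAI format
print(response.choices[0].message.content)
```

Since LiteLLM returns OpenAI-style response objects, the rest of main.py should work unchanged once the model string is swapped.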