If local LLM APIs were supported, we could test against local models, which would also be much faster.
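
For illustration, here is a minimal sketch of what "local LLM API" support could look like, assuming the project talks to an OpenAI-compatible endpoint. The base URL, model name, and API key below are hypothetical examples (e.g., Ollama exposes an OpenAI-compatible API at `http://localhost:11434/v1`); they are not part of this project's current configuration.

```python
from openai import OpenAI

# Point the standard OpenAI client at a local, OpenAI-compatible server
# (example: Ollama). The api_key is ignored by most local servers but the
# client requires a non-empty value.
client = OpenAI(
    base_url="http://localhost:11434/v1",  # hypothetical local endpoint
    api_key="not-needed",
)

response = client.chat.completions.create(
    model="llama3",  # hypothetical local model name
    messages=[{"role": "user", "content": "Hello from a local model!"}],
)

print(response.choices[0].message.content)
```

If the project already reads the API base URL and model from configuration or environment variables, exposing those settings would be enough to let users point it at a local server without any other code changes.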