Why not call OpenAI?
krrishdholakia opened this issue · 4 comments
Hey @srush,
I saw that you're using manifest for making OpenAI/LLM calls rather than calling the API yourself - why is that?
MiniChain/minichain/backend.py, line 206 (commit b79ebc5)
Context: I'm working on LiteLLM, an abstraction library to simplify LLM API calls.
No good reason, I think it didn't exist when I wrote this.
Would love to remove the LLM API and caching layer entirely from minichain if possible. If your library does that, I would switch.
That's great! We already handle LLM API calling, including streaming - I'm assuming that's the layer you're trying to replace? I'll make a PR for it.
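For reference, a minimal sketch of what that call path looks like through LiteLLM's OpenAI-style interface - assuming `litellm` is installed and an `OPENAI_API_KEY` is set; the model name is illustrative and exact response fields depend on the litellm version:

```python
# Minimal sketch: a plain call and a streaming call via litellm.completion.
# Assumes litellm's OpenAI-compatible response schema; model is illustrative.
from litellm import completion

# Plain (non-streaming) call
response = completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Say hello"}],
)
print(response.choices[0].message.content)

# Streaming: iterate over OpenAI-style chunks as they arrive
for chunk in completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Say hello"}],
    stream=True,
):
    print(chunk.choices[0].delta.content or "", end="")
```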
Last q - why don't you want to manage this yourself?
> Last q - why don't you want to manage this yourself?
Minichain is currently 2 files, but I would like it to be 1 file 🥇
More, though, it's that OpenAI keeps changing up the API, and it would be nice not to have to worry about that.
The main features I need are call, embeds, streaming, and stop keywords. In theory it might be nice to have caching and some of the more advanced features like the Function Call API. Finally, I would love to eventually support offline models through something like TGI.
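For concreteness, a hedged sketch of how the remaining two features (stop keywords and embeds) map onto an OpenAI-compatible interface like LiteLLM's - assuming its `completion`/`embedding` entry points, with illustrative model names:

```python
# Hedged sketch of stop keywords and embeddings; `completion` and
# `embedding` are litellm's OpenAI-compatible entry points, and the
# model names here are only examples.
from litellm import completion, embedding

# Stop keywords: generation halts once a stop sequence is produced
response = completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "List the planets, one per line."}],
    stop=["\n\n"],
)

# Embeddings (the `embeds` use case above)
emb = embedding(
    model="text-embedding-ada-002",
    input=["MiniChain is a tiny library for prompt chaining."],
)
vector = emb.data[0]["embedding"]
```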