AbanteAI/mentat

[Suggestion] Using Exponential Backoff to avoid LLM Rate Limit Errors

gssakash-SxT opened this issue · 0 comments

I'm probably over-thinking this since the current models are a lot more powerful, but would it be worth using exponential backoff here so that Mentat doesn't stop outright when the LLM API it's using returns a rate-limit error?

The tenacity Python library, which OpenAI recommends in one of their cookbooks, makes this easy to implement, but I wanted to check whether it would actually solve the problem here and hear your thoughts.
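For illustration, here is a minimal stdlib-only sketch of the idea: retry on a rate-limit error, doubling the wait each attempt and adding jitter so concurrent clients don't retry in lockstep. `RateLimitError` is a stand-in for the provider's actual exception (e.g. `openai.RateLimitError`), and the parameter names are hypothetical; tenacity's `retry` decorator with `wait_random_exponential` and `stop_after_attempt` gives the same behavior out of the box.

```python
import random
import time
from functools import wraps


class RateLimitError(Exception):
    """Stand-in for the LLM provider's rate-limit exception."""


def retry_with_exponential_backoff(max_retries=5, base_delay=1.0,
                                   max_delay=60.0, sleep=time.sleep):
    """Retry the wrapped call on RateLimitError with exponential backoff.

    The delay doubles after each failed attempt (capped at max_delay) and is
    multiplied by a random jitter factor in [1, 2). `sleep` is injectable so
    the behavior can be tested without real waiting.
    """
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            delay = base_delay
            for attempt in range(max_retries):
                try:
                    return func(*args, **kwargs)
                except RateLimitError:
                    if attempt == max_retries - 1:
                        raise  # out of retries; surface the error
                    sleep(min(delay, max_delay) * (1 + random.random()))
                    delay *= 2
        return wrapper
    return decorator
```

A call site would then just wrap the request function, e.g. `@retry_with_exponential_backoff(max_retries=5)` above the function that hits the LLM API, and transient rate limits would be absorbed instead of stopping Mentat.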