ujenjt/call-rate-limiter

Feature request: different strategies for managing overflows

e-kolpakov opened this issue · 2 comments

This means that if you call rateLimitedFunc 150 times but only 100 calls fit in the time frame, the remaining 50 calls will be postponed and executed later to respect the given rate limits.

It might make sense to be able to choose the overflow management strategy, for example:

  1. Postpone and keep all (current one)
  2. Postpone and drop new - keep up to X (configurable) oldest requests
  3. Postpone and drop old - keep up to X (configurable) newest requests
  4. Backpressure? (reactive streams)
  5. Fail - throw an exception/fail the promise if rate limiting is hit right away.

Inspired/related: https://doc.akka.io/docs/akka/2.5/stream/stream-rate.html#buffers-in-akka-streams
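The buffered strategies (2), (3), and (5) above can be sketched as a single bounded-queue admission policy. This is only an illustration of the proposed semantics, not the package's implementation; the function name, the `strategy` strings, and the plain-array queue are all made up for the example.

```javascript
// Bounded pending-call buffer with pluggable overflow strategies.
// All names here are illustrative, not part of call-rate-limiter's API.
function pushWithStrategy(queue, item, maxSize, strategy) {
  if (queue.length < maxSize) {
    queue.push(item);
    return true; // accepted: buffer has room
  }
  switch (strategy) {
    case 'dropNew':
      // Keep up to maxSize oldest requests; reject the incoming one.
      return false;
    case 'dropOld':
      // Keep up to maxSize newest requests; evict the oldest pending one.
      queue.shift();
      queue.push(item);
      return true;
    case 'fail':
      // Fail fast: surface the overflow to the caller immediately.
      throw new Error('rate limit buffer overflow');
    default:
      // 'keepAll': unbounded postponement (the current behaviour).
      queue.push(item);
      return true;
  }
}
```

The return value (or thrown error) tells the caller whether its request is still pending, which maps naturally onto resolving or rejecting the wrapped function's promise.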

We definitely didn't want to fail, but all of the options and use cases seem more than valid. Thanks for pointing them out!

Hey @e-kolpakov, thanks for the idea.

We've built this package around the idea that we have several functions working with the same rate-limited resource. We wrap those functions with the rateLimit module and then work with the wrapped functions. If some third-party code tries to overcome the limits by calling those functions too frequently, it should simply wait for a while.
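That "postpone and keep all" behaviour can be sketched as a promise-returning wrapper around any function. To be clear, `rateLimit(maxCalls, intervalMs, fn)` below is a hypothetical signature for illustration, not the package's actual API.

```javascript
// Sketch of a "postpone and keep all" limiter: every call above the limit
// is queued and executed later, never dropped. Hypothetical API, not the
// real call-rate-limiter signature.
function rateLimit(maxCalls, intervalMs, fn) {
  const queue = [];
  let used = 0; // calls currently counted against the window

  function release() {
    used--;
    drain(); // a slot freed up; run the next postponed call, if any
  }

  function drain() {
    while (used < maxCalls && queue.length > 0) {
      const { args, resolve, reject } = queue.shift();
      used++;
      setTimeout(release, intervalMs); // free the slot after the window
      Promise.resolve()
        .then(() => fn(...args))
        .then(resolve, reject);
    }
  }

  return (...args) =>
    new Promise((resolve, reject) => {
      queue.push({ args, resolve, reject });
      drain();
    });
}
```

With this shape, third-party callers just `await` the wrapped function; they never need to know whether their call ran immediately or was postponed.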

Imagine we have a microservice that processes crypto payments with its own queue and uses some third-party rate-limited API to actually interact with a blockchain. The rate limit in this case might be something like 100 requests per minute. So it's OK to wait several additional minutes and execute the transaction after that, but dropping the tx is not OK.

This is the use case for the "Postpone and keep all" strategy. Could you please give some use cases for the other strategies? It's easier to reason about a new feature with particular examples.