Another dynamic limit need
diemus opened this issue · 9 comments
I have multiple resources that are being requested, each with varying capacities. Therefore, I need to implement rate limiting based on the capacity of each individual resource.
This can be implemented easily. Here is an example using the Gin framework:
// 5 requests per second for /api/v1
rateV1, _ := limiter.NewRateFromFormatted("5-S")
middleware1 := mgin.NewMiddleware(limiter.New(store, rateV1))

// 100 requests per second for /api/v2
rateV2, _ := limiter.NewRateFromFormatted("100-S")
middleware2 := mgin.NewMiddleware(limiter.New(store, rateV2))

router.Use(func(ctx *gin.Context) {
	if ctx.Request.URL.Path == "/api/v1" {
		middleware1(ctx)
		return
	}
	if ctx.Request.URL.Path == "/api/v2" {
		middleware2(ctx)
		return
	}
})
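For completeness, here is a minimal, self-contained sketch of the setup the snippet above assumes, using the ulule/limiter v3 in-memory store; the paths, rates, and port are placeholders:

package main

import (
	"net/http"

	"github.com/gin-gonic/gin"
	limiter "github.com/ulule/limiter/v3"
	mgin "github.com/ulule/limiter/v3/drivers/middleware/gin"
	"github.com/ulule/limiter/v3/drivers/store/memory"
)

func main() {
	// In-memory store: counters live in this process only. The Redis
	// driver can be swapped in for multi-instance deployments.
	store := memory.NewStore()

	// 5 requests/second for /api/v1, 100 requests/second for /api/v2.
	rateV1, _ := limiter.NewRateFromFormatted("5-S")
	rateV2, _ := limiter.NewRateFromFormatted("100-S")
	limitV1 := mgin.NewMiddleware(limiter.New(store, rateV1))
	limitV2 := mgin.NewMiddleware(limiter.New(store, rateV2))

	router := gin.Default()
	router.Use(func(ctx *gin.Context) {
		switch ctx.Request.URL.Path {
		case "/api/v1":
			limitV1(ctx)
		case "/api/v2":
			limitV2(ctx)
		}
	})
	router.GET("/api/v1", func(ctx *gin.Context) { ctx.String(http.StatusOK, "v1") })
	router.GET("/api/v2", func(ctx *gin.Context) { ctx.String(http.StatusOK, "v2") })
	router.Run(":8080")
}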
Thanks for the example, but what if some users want higher quotas for the same resource? Should I generate a middleware for every possible quota in advance?
This depends on your specific business logic. Below is some pseudocode as a reference:
var cache sync.Map

router.Use(func(ctx *gin.Context) {
	uid := getUserID(ctx)

	// Reuse the middleware already built for this user, if any.
	if v, ok := cache.Load(uid); ok {
		h := v.(gin.HandlerFunc)
		h(ctx)
		return
	}

	// Otherwise build a middleware with this user's quota and cache it.
	rate := genRateByUserID(uid)
	h := mgin.NewMiddleware(limiter.New(store, rate))
	cache.Store(uid, h)
	h(ctx)
})
Please pay attention to the store used by the limiter. If you are using Redis for storage, you will need to use different prefixes so that the counters of the different limiters do not collide on the same keys. With in-memory storage, you should not encounter this issue.
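For instance, with the Redis driver you can give each resource (or user group) its own key prefix. A sketch, assuming a recent ulule/limiter v3 with the go-redis client and illustrative prefix names:

package main

import (
	libredis "github.com/redis/go-redis/v9"
	limiter "github.com/ulule/limiter/v3"
	sredis "github.com/ulule/limiter/v3/drivers/store/redis"
)

func main() {
	// go-redis client; the address is a placeholder.
	client := libredis.NewClient(&libredis.Options{Addr: "localhost:6379"})

	// One store per resource, each with its own key prefix, so the
	// counters of the two limiters never collide in Redis.
	storeV1, err := sredis.NewStoreWithOptions(client, limiter.StoreOptions{Prefix: "limiter_api_v1"})
	if err != nil {
		panic(err)
	}
	storeV2, err := sredis.NewStoreWithOptions(client, limiter.StoreOptions{Prefix: "limiter_api_v2"})
	if err != nil {
		panic(err)
	}

	rateV1, _ := limiter.NewRateFromFormatted("5-S")
	rateV2, _ := limiter.NewRateFromFormatted("100-S")
	limiterV1 := limiter.New(storeV1, rateV1)
	limiterV2 := limiter.New(storeV2, rateV2)

	// limiterV1 / limiterV2 would then be wrapped with mgin.NewMiddleware
	// exactly as in the first example; discarded here to keep the sketch short.
	_, _ = limiterV1, limiterV2
}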
Thank you for this very nice library. I was wondering, performance-wise, how scalable is this approach for, say, 10k users? My current understanding is that my program would need to keep all of the generated middlewares in memory.
Regarding the performance and scalability of this approach, I haven't personally tested it. However, using Redis as the limiter's storage backend can be a viable option for achieving scalability.
Thank you for your answer. Indeed, that could be interesting. I'm going to try the local in-memory approach first and then see how well it scales.