MinnDevelopment/discord-webhooks

[Question] How does the lib handle rate limits

Andre601 opened this issue · 3 comments

I'm curious to know how exactly this library handles rate limits. By that I mean the following questions:

  • How does it obtain rate limit information? Is it a hard-coded value somewhere in the lib itself, or is it retrieved from something like a response header from Discord?
  • How exactly does it handle new requests when the current rate limit is used up? Does it simply queue them for later sending? Print warnings?

I'm asking this because a piece of software I use (a plugin) uses this library. Whenever a large amount of text is sent through a webhook, the WebhookClient encounters 429 errors and prints them to the console:

[screenshot of 429 error logs]

According to the dev, and from my own checks, the library is used as intended: a single WebhookClient instance is created for a single webhook and reused for the entire lifetime, without being recreated.

What could be possible causes for this seemingly random issue?

Rate-Limits are handled here:

private synchronized void update0(Response response) throws IOException {
    final long current = System.currentTimeMillis();
    final boolean is429 = response.code() == RATE_LIMIT_CODE;
    final String remainingHeader = response.header("X-RateLimit-Remaining");
    final String limitHeader = response.header("X-RateLimit-Limit");
    final String resetHeader = response.header("X-RateLimit-Reset-After");
    if (is429) {
        handleRatelimit(response, current);
        return;
    }
    else if (remainingHeader == null || limitHeader == null || resetHeader == null) {
        LOG.debug("Failed to update buckets due to missing headers in response with code: {} and headers: \n{}",
                  response.code(), response.headers());
        return;
    }
    remainingUses = Integer.parseInt(remainingHeader);
    limit = Integer.parseInt(limitHeader);
    final long reset = (long) Math.ceil(Double.parseDouble(resetHeader)); // relative seconds
    final long delay = reset * 1000;
    resetTime = current + delay;
}
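So the rate limit information comes from Discord's response headers, not a hard-coded value. The reset arithmetic can be sketched in isolation; the class and method names below (RateLimitMath, computeResetMillis) are hypothetical, made up for illustration, not part of the library.

```java
// Standalone sketch of the reset-time arithmetic shown above.
// RateLimitMath and computeResetMillis are hypothetical names, not library API.
public class RateLimitMath {
    // Discord sends X-RateLimit-Reset-After as relative seconds (may be fractional).
    // The code above rounds up to whole seconds, then converts to absolute epoch millis.
    static long computeResetMillis(long currentMillis, String resetAfterHeader) {
        long resetSeconds = (long) Math.ceil(Double.parseDouble(resetAfterHeader));
        return currentMillis + resetSeconds * 1000;
    }

    public static void main(String[] args) {
        // With 1.337 seconds remaining, the bucket resets 2000 ms after "now" (1000).
        System.out.println(computeResetMillis(1000L, "1.337")); // prints 3000
        System.out.println(computeResetMillis(0L, "5"));        // prints 5000
    }
}
```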

When a rate-limit is hit, the requests are scheduled here:

protected void backoffQueue() {
    long delay = bucket.retryAfter();
    if (delay > 0)
        LOG.debug("Backing off queue for {}", delay);
    pool.schedule(this::drainQueue, delay, TimeUnit.MILLISECONDS);
}
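The delay it schedules with is simple clock arithmetic: zero while uses remain, otherwise the time left until the bucket resets. A minimal sketch, assuming a bucket shaped roughly like the fields in update0 above (BucketSketch and its members are stand-in names, not the library's actual type):

```java
// Hypothetical stand-in for the internal rate-limit bucket; names are assumptions.
public class BucketSketch {
    long resetTime;     // absolute epoch millis when the bucket refills
    int remainingUses;  // requests left before the limit is hit

    // Delay before draining the queue again: zero if uses remain
    // or the reset time has already passed, otherwise the time left.
    long retryAfter(long nowMillis) {
        if (remainingUses > 0)
            return 0;
        return Math.max(0, resetTime - nowMillis);
    }

    public static void main(String[] args) {
        BucketSketch b = new BucketSketch();
        b.remainingUses = 0;
        b.resetTime = 5000L;
        System.out.println(b.retryAfter(3000L)); // 2000 ms left to wait
        System.out.println(b.retryAfter(6000L)); // reset already passed -> 0
    }
}
```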

This happens both on new requests and queued requests.

What you might be observing here is a sub-resource limit.


Sub-resource limit? Never heard of that before. Any way to prevent such an issue?

There is no way to prevent sub-resource limits. You just have to slow down requests.
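"Slowing down" can be as simple as enforcing a minimum interval between sends on the caller's side, before handing messages to the WebhookClient. A minimal sketch; the interval value is an arbitrary example, not a documented Discord limit, and SimpleThrottle is a made-up helper, not part of this library:

```java
// Minimal client-side throttle: enforces a minimum gap between requests.
// SimpleThrottle is a hypothetical helper; the interval is an arbitrary example.
public class SimpleThrottle {
    private final long minIntervalMillis;
    private long nextAllowed = 0;

    SimpleThrottle(long minIntervalMillis) {
        this.minIntervalMillis = minIntervalMillis;
    }

    // Blocks until at least minIntervalMillis has passed since the previous call.
    synchronized void acquire() throws InterruptedException {
        long now = System.currentTimeMillis();
        if (now < nextAllowed)
            Thread.sleep(nextAllowed - now);
        nextAllowed = Math.max(now, nextAllowed) + minIntervalMillis;
    }

    public static void main(String[] args) throws InterruptedException {
        SimpleThrottle throttle = new SimpleThrottle(100);
        long start = System.currentTimeMillis();
        for (int i = 0; i < 3; i++) {
            throttle.acquire(); // first call passes immediately, later calls wait
        }
        long elapsed = System.currentTimeMillis() - start;
        System.out.println(elapsed >= 200); // two enforced 100 ms gaps -> true
    }
}
```

Calling throttle.acquire() before each client.send(...) spreads the requests out instead of letting a burst hit the sub-resource limit.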