Add solution for rate limits
Airtable mentions in the docs that it limits API calls:
> The API is limited to 5 requests per second. If you exceed this rate, you will receive a 429 status code and will need to wait 30 seconds before subsequent requests will succeed.
If you need a higher request rate, they suggest either using retry logic or a caching proxy.
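The retry-logic option they mention could look something like this — a minimal sketch (the helper name and parameters are hypothetical, not from this project): on a 429, wait out the cool-down and try again.

```javascript
// Hypothetical retry helper: re-issue a request after a 429 response.
// The 30s delay matches Airtable's documented cool-down; retries/delayMs
// are illustrative defaults.
async function fetchWithRetry(doRequest, { retries = 2, delayMs = 30000 } = {}) {
  const res = await doRequest();
  if (res.status === 429 && retries > 0) {
    // Rate limited: wait out the cool-down, then retry.
    await new Promise((resolve) => setTimeout(resolve, delayMs));
    return fetchWithRetry(doRequest, { retries: retries - 1, delayMs });
  }
  return res;
}
```

The obvious downside is that every rate-limited request blocks for the full cool-down, which is why caching reads is probably the better first step.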
I think it'd be easy enough to add a caching layer, especially seeing simple implementations like micro-open-graph's use of memory-cache. Maybe just cache results for 30 seconds by default, configurable via an env variable?
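For reference, a dependency-free sketch of the same idea (memory-cache's put/get with a timeout) — the env variable name and defaults here are just illustrative:

```javascript
// TTL read cache sketch: entries expire after a configurable window.
// CACHE_TTL_MS is a hypothetical env variable, not an existing option.
const DEFAULT_TTL_MS = Number(process.env.CACHE_TTL_MS || 30 * 1000);

const store = new Map(); // key -> { value, expiresAt }

function cachePut(key, value, ttlMs = DEFAULT_TTL_MS) {
  store.set(key, { value, expiresAt: Date.now() + ttlMs });
}

function cacheGet(key) {
  const entry = store.get(key);
  if (!entry) return undefined;
  if (Date.now() > entry.expiresAt) {
    store.delete(key); // lazily evict stale entries
    return undefined;
  }
  return entry.value;
}
```

Read handlers would check `cacheGet` first and only hit Airtable (then `cachePut`) on a miss.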
For reads, the caching approach makes the most sense to me as well. Obviously you could still hit those limits by having many concurrent users, but it would greatly lessen the chances. Also, if you're using Airtable for some mission-critical app that has more than 5 users/second...well, you may want to rethink your choices.
Writes are trickier. I've never implemented something like this, but you could queue all writes up in memory and if you get a rate limit failure, wait 30s while continuing to queue up writes and return 202 status codes. Obviously, you could still run into problems but this would at least give you some cushion if you had a sudden increase in signups or something.
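Roughly, that queue-and-202 idea could be sketched like this (entirely hypothetical — in-memory only, so queued writes are lost if the process dies):

```javascript
// Hypothetical write queue: accept writes immediately (the HTTP layer
// would answer 202 Accepted), drain them one at a time, and back off
// for the cool-down window on a 429.
const queue = [];
let draining = false;

function enqueueWrite(doWrite) {
  queue.push(doWrite);
  if (!draining) drain();
}

async function drain(pauseMs = 30 * 1000) {
  draining = true;
  while (queue.length > 0) {
    const res = await queue[0]();
    if (res.status === 429) {
      // Rate limited: keep the write queued and wait out the cool-down.
      await new Promise((resolve) => setTimeout(resolve, pauseMs));
      continue;
    }
    queue.shift(); // success (or non-retryable error): drop it
  }
  draining = false;
}
```

The memory-only queue is exactly why a durable version would need an external system, as noted below.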
I'm inclined to hold off on thinking about write-side caching. If, as you say, we'd want a queue, it'd likely require external systems to manage that queue. A little too crazy for a simple tool like this.
Open to a caching PR for reads though :)
I agree. I'm definitely thinking about read caching though, so I'll see if I can come up with something.