What are the limits of how many scans our system can run and how are we alerting users when we reach those limits?
Closed this issue · 3 comments
Hoping we can resolve this question async and discuss answers on our Monday (August 26) call.
With #446, we started to run into the limits of our scan service. We don't (yet) automatically spin up workers to handle larger scan loads. This means that users can crash our api.equalify.app or run huge backlogs on scan.equalify.app.
Until we start spinning up workers, we need to establish an upper limit that we support.
Correct me if I'm wrong, @azdak and @heythisischris, but we probably should block a user from submitting scans if:
- *x* number of scans are currently running at scan.equalify.app
- *y* number of scans are being processed by api.equalify.app
If that's right, what are *x* and *y*, and how are we informing the user via the API and dashboard when those limits are hit?
It would be good to establish limits now and then scale up from there, so we can tell users what to expect. I think?
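To make the proposal concrete, here's a minimal sketch of what a submission guard could look like. The threshold values and function name are hypothetical (the thread hasn't decided *x* or *y* yet); the point is that the check returns both a yes/no and a message the API and dashboard can surface to the user.

```python
# Hypothetical thresholds -- "x" and "y" from the discussion are not yet decided.
MAX_RUNNING_SCANS = 100   # x: scans currently running at scan.equalify.app (assumed value)
MAX_PROCESSING = 500      # y: scans being processed by api.equalify.app (assumed value)

def can_submit_scan(running_scans: int, processing_scans: int) -> tuple[bool, str]:
    """Return (allowed, message) so both the API response and the
    dashboard can tell the user why a submission was blocked."""
    if running_scans >= MAX_RUNNING_SCANS:
        return False, "Scan limit reached: too many scans are currently running. Try again later."
    if processing_scans >= MAX_PROCESSING:
        return False, "Processing backlog is full. Try again later."
    return True, "OK"
```

On the API side, the blocked case would most naturally map to an HTTP 429 response with the message in the body.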
Well, just from a scan standpoint, there really shouldn't be any hard limits on the number of scans; it's just down to how long it'll take until scan gets around to actually processing them (at least until Redis and the server get overloaded, which I think would be in the millions of queued scans).
I think there's a separate, system-wide business question of whether we want to cap scans to a certain number monthly or whatever, to alleviate scan congestion issues.
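If the real constraint is queue wait time rather than a hard cap, one alternative to blocking is estimating the backlog delay and showing it to the user. A rough sketch, with assumed numbers (the average scan time, worker count, and queue name are all hypothetical; in practice the queue depth would come from something like `LLEN` on the Redis list that holds queued scans):

```python
def estimated_wait_seconds(queue_depth: int,
                           avg_scan_seconds: float = 30.0,  # assumed average time per scan
                           workers: int = 4) -> float:      # assumed worker count
    """Rough backlog estimate: queued scans spread evenly across workers.

    queue_depth would typically be read from Redis, e.g.
    r.llen("scan_queue"), where "scan_queue" is a placeholder name.
    """
    return queue_depth * avg_scan_seconds / workers
```

This keeps azdak's "no hard limits" position while still giving users the "what to expect" messaging the thread is asking for.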
Yeah, @azdak. This is something we want to do so we can communicate with clients.
This is stupid. If we limit, we're going to have dropped data and things like that. We'll need to approach this a different way.