Add a built-in, pluggable cache to the diffing server
danielballan opened this issue · 6 comments
Split off from #256. @Mr0grog:
> Have a cache built in (probably via Redis) at this level, instead of requiring someone to build it in a proxy/translation layer in front of this service (downsides: more complexity in this service, locking people in to one or a few cache implementations)
As long as `dict` and `functools.lru_cache` are available options, sure. I expect we can do this in such a way that adding a new cache implementation is an easy lift for anyone who feels "locked in" by whatever we choose.
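For the sake of illustration, a minimal sketch of what a pluggable backend interface with a plain `dict` implementation could look like. All of the names below (`DiffCache`, `DictCache`, `get`/`set`) are hypothetical, not an actual API in this project:

```python
# Hypothetical sketch of a pluggable cache interface; none of these
# names are part of this project's actual API.
from typing import Optional


class DiffCache:
    """Interface that any cache backend would implement."""

    def get(self, key: str) -> Optional[bytes]:
        raise NotImplementedError

    def set(self, key: str, value: bytes) -> None:
        raise NotImplementedError


class DictCache(DiffCache):
    """Simplest possible backend: a plain in-process dict (no eviction)."""

    def __init__(self) -> None:
        self._data: dict = {}

    def get(self, key: str) -> Optional[bytes]:
        return self._data.get(key)

    def set(self, key: str, value: bytes) -> None:
        self._data[key] = value
```

A bounded in-process variant could skip the interface entirely and just wrap the diff function in `functools.lru_cache(maxsize=...)`.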
It just feels a little like, if this thing is almost always going to sit behind some other proxy anyway, caching might be better accomplished at that level.
Would this be a good fit for an nginx cache? On a call, @jsnshrmn mentioned having some experience with a specific method/technology around this, but I forget the jargon.
Sure… I guess I just mean that we have the API server sitting in front of this (where we already cache these diffs) and IA might also have a server sitting in front, so in practical terms, this might represent a lot of extra work or duplication for very little gain.
I’m kinda skeptical of an in-process cache (e.g. with `functools.lru_cache`) here because this server is doing heavy work, so you are unlikely to only be running one. You’d ideally want all instances to share a cache, so having one in-process is not the right solution most (?) of the time.
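To make the shared-cache point concrete, here's a rough sketch of what a Redis-backed cache shared across several server instances might look like. It assumes the third-party `redis` client; the key scheme, TTL, and helper names are made up for illustration:

```python
# Sketch only: a shared cache where every instance derives the same key
# for the same diff request, so results computed by one instance are
# reused by the others. Key scheme and function names are hypothetical.
import hashlib
import json

import redis

client = redis.Redis(host="localhost", port=6379)


def diff_cache_key(a_url: str, b_url: str, diff_type: str) -> str:
    # Hash the request parameters so all instances agree on the key.
    raw = json.dumps([a_url, b_url, diff_type]).encode("utf-8")
    return "diff:" + hashlib.sha256(raw).hexdigest()


def get_or_compute_diff(a_url, b_url, diff_type, compute):
    key = diff_cache_key(a_url, b_url, diff_type)
    cached = client.get(key)
    if cached is not None:
        return json.loads(cached)
    result = compute(a_url, b_url, diff_type)
    # Expire entries after a day so the cache doesn't grow unbounded.
    client.setex(key, 24 * 60 * 60, json.dumps(result))
    return result
```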
> You’d ideally want all instances to share a cache, so having one in-process is not the right solution most (?) of the time.
That's a very good point that didn't occur to me.
Just circling back around to this now that I've got bandwidth for actual dev tasks and have my brain more wrapped around scanner's architecture. I don't think processing is the best place for caching to happen either. @danielballan okay to close this issue?
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in seven days if no further activity occurs. If it should not be closed, please comment! Thank you for your contributions.