linkedin/parseq

Support Parseq plan local cache to reduce fanout calls.

mchen07 opened this issue · 6 comments

Sometimes an application makes duplicate downstream calls when serving a request. One way to avoid this is to make the Task sharable and pass it around, but it is often hard to refactor the code that way while keeping it readable. It would be great if ParSeq could provide some caching capability to avoid duplicate calls.

One idea: ParSeq could provide a plan-local cache <K, Task>, where K is a user-defined key and the Task computes the result (a rough sketch follows the list below). It would make sense to implement this at the ParSeq level because:

  1. ParSeq has a well-defined Plan life-cycle with a hook for when the Plan completes (even after the response has been sent to the client)
  2. It offers a natural API for asynchronous computation, which is the most common use case for a cache (we have seen similar feature requests for caching Rest.li request results)
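A minimal sketch of what this could look like. Note that `PlanLocalCache`, `getOrCompute`, and `clear` are hypothetical names that do not exist in ParSeq today; this only illustrates the proposed behaviour of caching Tasks by a user-defined key for the duration of one plan:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

import com.linkedin.parseq.Task;

// Hypothetical sketch only: the first lookup for a key registers the Task,
// later lookups within the same plan reuse it, and the whole map would be
// discarded via the plan-completion hook mentioned in point 1.
public class PlanLocalCache<K> {
  private final ConcurrentHashMap<K, Task<?>> _tasks = new ConcurrentHashMap<>();

  // Returns the Task cached under `key`, creating it with `taskSupplier` on first
  // use, so identical downstream calls within one plan share a single Task.
  @SuppressWarnings("unchecked")
  public <T> Task<T> getOrCompute(K key, Supplier<Task<T>> taskSupplier) {
    return (Task<T>) _tasks.computeIfAbsent(key, k -> taskSupplier.get());
  }

  // Would be wired to the Plan completion hook so cached Tasks don't outlive the plan.
  public void clear() {
    _tasks.clear();
  }
}
```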

This doesn't have to be a cache; it could be a client-supplied unique identifier for each task, which ParSeq can use before executing the plan to de-dupe tasks. Clients would be responsible for providing distinct identifiers for non-identical tasks and the same identifier for identical tasks. This gives clients more control over the behaviour. Adding a cache would also require clients to have control over the cache parameters, since it can affect the machine it's running on.
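A usage sketch of that idea, reusing the hypothetical `PlanLocalCache` above; `Profile`, `memberId`, and `fetchProfileTask` are made-up client code, and the string key plays the role of the client-supplied identifier:

```java
// Hypothetical client code: "profile:" + memberId is the client-supplied key.
// Identical fetches share one Task; a different memberId yields a different
// key and therefore a separate downstream call.
PlanLocalCache<String> cache = new PlanLocalCache<>();

Task<Profile> first  = cache.getOrCompute("profile:" + memberId,
    () -> fetchProfileTask(memberId));
Task<Profile> second = cache.getOrCompute("profile:" + memberId,
    () -> fetchProfileTask(memberId));  // de-duped: same Task instance as `first`
```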

@hiteshsharma I don't quite get your comment. In the proposed approach, ParSeq maintains the cache and asks clients to provide the cache key K (the unique identifier you describe above).

@mchen07 Later I realized a cache would be required anyway. Pardon my limited knowledge of ParSeq. I'm really looking forward to this change. If this is being picked up, I can also help with some of the sub-tasks if needed.

Does this already exist in the ParSeq rest client?

No, I don't think ParSeq-rest-client has a cache built in. @karthikbalasub @junchuanwang Do we have any plans to work on this in the near future?

@mchen07 We just discussed this yesterday, and from my understanding we are not going to work on it, at least not this quarter.