EPIC: Job Dependencies
Motivation
Jobs can have dependencies on other jobs and should run once another job has successfully finished (or faulted). While it is currently possible to achieve this via the `IJobNotificationHandler` interface, it is rather cumbersome.
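For context, chaining jobs today via `IJobNotificationHandler` looks roughly like the sketch below. The exact interface members and the `IInstantJobRegistry` usage are assumptions for illustration and may not match the actual NCronJob API:

```csharp
// Sketch of the status quo: reacting to a job's outcome in a notification
// handler and manually triggering the follow-up job. Member signatures and
// the IInstantJobRegistry service are assumptions, not confirmed API.
public sealed class JobDoneHandler : IJobNotificationHandler<Job>
{
    private readonly IInstantJobRegistry registry;

    public JobDoneHandler(IInstantJobRegistry registry) => this.registry = registry;

    public Task HandleAsync(JobExecutionContext context, Exception? exception, CancellationToken token)
    {
        if (exception is null)
            registry.RunInstantJob<SuccessJob>("foo"); // job ran to completion
        else
            registry.RunInstantJob<FailureJob>();      // job faulted

        return Task.CompletedTask;
    }
}
```

The proposed fluent `When(...)` API would replace this boilerplate with a single registration call.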
Basically, we could allow two variants:
1. Dependency independent of parameters - depending only on the job type
```csharp
builder.Services.AddNCronJob(options => options.AddCronJob<Job>(...)
    .When(
        success => success.RunJob<SuccessJob>(p => p.WithParameter("foo")),
        failure => failure.RunJob<FailureJob>())
);
```
So even if `AddCronJob<Job>` registers 12 different CRON expressions and/or an instant trigger, the same success and failure jobs are invoked for all of them.
2. The specific job (with parameter)
```csharp
builder.Services.AddNCronJob(options => options.AddCronJob<Job>(
    p => p.WithCronJob("0 * * * *").WithParameter("bar")
        .When(
            success => success.RunJob<SuccessJob>(s => s.WithParameter("foo")),
            failure => failure.RunJob<FailureJob>()))
);
```
So a specific `JobDefinition` defines its successors. Different CRON jobs can have different success and failure jobs.
General Questions
- How deep do we want to nest this? Is it enough to define the first layer? So if we go with approach one:
```mermaid
flowchart LR
    Job1 -- Success --- Job2
    Job1 -- Faulted --- Job3
```
which is the given example. If a user wants to declare success/faulted behavior for `Job2` or `Job3`, they would need to call `AddJob<Job2>().When(...)`.
We could also directly offer a recursive builder to do this in one go:
```mermaid
flowchart LR
    Job1 -- Success --- Job2
    Job1 -- Failure --- Job3
    Job2 -- Success --- Job4
    Job2 -- Failure --- Job5
    Job4 -- Success --- Job6
```
While this might be "cool," it clutters up a lot of code.
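Spelled out in the proposed fluent syntax, the chain above would look roughly like this (a sketch of the hypothetical recursive builder; nested lambda parameters are renamed since C# forbids shadowing):

```csharp
// Hypothetical recursive builder for the Job1..Job6 chain above.
// s1/f1 etc. are distinct names because C# does not allow a nested
// lambda parameter to shadow an outer one.
builder.Services.AddNCronJob(options => options.AddCronJob<Job1>(...)
    .When(
        s1 => s1.RunJob<Job2>()
            .When(
                s2 => s2.RunJob<Job4>()
                    .When(s3 => s3.RunJob<Job6>()),
                f2 => f2.RunJob<Job5>()),
        f1 => f1.RunJob<Job3>())
);
```

The nesting depth makes the clutter concern visible: every additional layer pushes the configuration further to the right.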
- Do we want to pass the `JobExecutionContext` from `Job1` to `Job2` or `Job3`, or pass in a completely new one? If so: what happens if the user specifies a `JobExecutionContext` and a parameter? Shall we add a new property on top of `JobExecutionContext`?
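One hypothetical answer to the context question: give the dependent job a fresh context and surface the parent's outcome through a dedicated property, so an explicit `WithParameter(...)` value and the inherited output can coexist. This is a sketch only; `ParentOutput` is not an existing NCronJob member:

```csharp
// Sketch: the dependent job receives its own context plus the parent's
// outcome via a new, hypothetical property.
public sealed class JobExecutionContext
{
    public object? Parameter { get; init; }     // value set via WithParameter(...)
    public object? ParentOutput { get; init; }  // hypothetical: result/state of the triggering job
}
```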
For the initial feature, I would go with approach 1 and only one level of dependency, so a job only knows its direct successors, not a recursive chain.
@falvarez1 Would love to have your thoughts on this one.
@linkdotnet I like the concept of a fluent API that allows you to add behaviors based on success/failures. I do have several thoughts on this 'job chaining' approach. Primarily, I see this flow as more than just a job with behavior that triggers another job; it's essentially a workflow or pipeline. Implementing this type of enhancement impacts several areas, including state management and how jobs are visualized in a dashboard.
I think it's important to distinguish this concept from the concept of a job. This distinction can help in organizing code, improving readability, and potentially optimizing the execution of jobs and workflows. You can add the first approach easily, but you'll hit a roadblock once you try to add more layers of control flow (like retries, fan-out/fan-in sequences, and other more dynamic flows). Regarding the second approach, I think this can get ugly really quickly, especially when trying to prevent infinite loops. You would need to pay special attention to preventing non-deterministic configurations, since those make it impossible to determine whether they would cycle.
I'll post back with a more detailed analysis after I've organized my thoughts further.
Sure thing - keep in mind that I aim for a more pragmatic and simple solution that may or may not evolve over time.
That is why there is already an open (draft) PR.