Kubernetes CRD facade
kenazk opened this issue · 12 comments
Is your feature request related to a problem? Please describe.
I want to execute a Lyra Workflow as a Kubernetes CRD so that I can leverage all my native Kubernetes tooling (i.e. kubectl, helm) to also deploy infrastructure and workflows.
Describe the solution you'd like
I can package up my workflow -> Submit the package to Lyra which registers it as a Kubernetes CRD -> Deploy/update/delete custom resource with kubectl/helm etc.
The Lyra controller should leverage the same library as the CLI and listen for Workflow CRDs in Kubernetes (a rough sketch of what that registration might produce follows the list below). It should provide the following operations:
- Deploy
- Upgrade
- Delete
- Get
- List
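To make the registration step more concrete, here is a minimal sketch of the kind of CRD the controller might register. The API group `lyra.example.org`, the version, and the names are placeholders invented for this sketch, not Lyra's actual schema:

```yaml
# Hypothetical sketch only: API group, version, and names are placeholders.
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: workflows.lyra.example.org
spec:
  group: lyra.example.org
  version: v1alpha1
  scope: Namespaced
  names:
    kind: Workflow
    singular: workflow
    plural: workflows
    shortNames:
      - wf
```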
Describe alternatives you've considered
helm plugin
Additional context
User experience should have the following steps:
- Author Workflow
- Package Workflow
- Register Workflow with Kubernetes
- Deploy Workflow with Kubernetes tooling (i.e. kubectl)
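As an illustration of that last deploy step, the workflow could become a plain custom resource applied with kubectl. Everything below (the API group, the `spec` fields, the file name) is an assumption for the sketch, not an agreed-on schema:

```yaml
# Hypothetical Workflow custom resource; field names are placeholders.
# Deployed with ordinary Kubernetes tooling, e.g.:
#   kubectl create -f my-workflow.yaml
apiVersion: lyra.example.org/v1alpha1
kind: Workflow
metadata:
  name: my-infra
spec:
  # Parameters handed to the workflow when Lyra runs it.
  values:
    region: eu-west-1
```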
@kenazk I'm not a fan of providing two paths to accomplish the same thing, which is what #48 and #49 describe. Does it make more sense to tightly control everything about deploying the controller (and whatever other pieces are required), or leave it as an exercise for the user to fiddle with Helm? I prefer the former as it is simpler for end users (though more complex for us? Idk)
Good questions @jdwelch. I think it goes back to finalizing the user experience for the Operator surfacing Workflow through K8s. If it's something like:
- Download Lyra
- Develop a Workflow (i.e. author, test, repeat)
- Get Lyra running in K8s
- Register the workflow
We know that (1) and (2) happen in a CLI on a local machine. That feels pretty good. Plus, we know that for (4), the user is interfacing through a CLI with a K8s-managed Lyra controller.
Given the above, keeping (3) within the Lyra CLI workflow seems optimal (i.e. deploying the controller through the CLI).
That said, if we go with that approach, one downside is the state reconciliation problem referenced by #57. If you have a bunch of resources deployed from your local workstation (i.e. identity.db is on disk) and you would like that to be in sync with a controller-managed Lyra, then Lyra will need to support talking to remote state storage.
On the other hand, we don't necessarily have to think of them as a coupled experience if:
- remote state storage is supported
- remote "repo" for workflow packages is supported
If we have a helm-repo like thing, then the user experience is:
- Download Lyra
- Develop a Workflow
- Publish Workflow to repo
- Deploy Lyra helm chart, talking to the same repo.
Then, it makes more sense to not include deploying controller as part of CLI.
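In that model the controller's Helm chart just needs to be pointed at the workflow repo and the remote state store. A hypothetical values.yaml might look like the following; neither the chart nor these keys exist today, they're only meant to show the shape of the configuration:

```yaml
# Hypothetical values.yaml for a "lyracontroller" chart; these keys are
# illustrative assumptions, not an existing chart interface.
workflowRepo:
  # Remote "repo" that published workflow packages are pulled from.
  url: https://workflows.example.org/repo
stateStorage:
  # Remote state store, replacing the local on-disk identity.db.
  backend: postgres
  connection: postgres://lyra:secret@db.example.org:5432/lyra
```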
Ah, OK. In that second case, is the idea that the Lyra controller gets deployed as a dependency of the workflow?
That's one way for sure (assuming the Workflow CRD is deployed with Helm). I was more just thinking that step (4) would be something like `helm install lyracontroller`. Step (5) would be `kubectl create -f workflow.yaml`.
OK, sweet, that's what I was thinking for 4 as well 👍
Not especially. I suppose for now whatever's easier
Updating after a few more weeks of discussion.
We've solved the problem as described, sort of. You can deploy Workflow resources using standard tooling, but those Workflow resources don't contain the actual workflow. The workflow is assumed to be external to Kubernetes (i.e. a file in Lyra's load path) and is referenced in the Kube resource by name.
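In other words, today's resource is effectively a pointer. It looks something like the sketch below, where the exact field names are illustrative rather than the real schema:

```yaml
# Illustrative only: the actual group and field names may differ.
apiVersion: lyra.example.org/v1alpha1
kind: Workflow
metadata:
  name: vpc-workflow
spec:
  # Name of a workflow that already exists in Lyra's load path, outside of
  # Kubernetes; the resource does not embed the workflow itself.
  workflowName: vpc
```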
We're considering some options around embedding the workflow itself into the Kubernetes Workflow resource, perhaps by one of the following (both options are roughly sketched after the list):
- Supporting Lyra yaml directly in the resource, which would be human-consumable but would support only a subset of Lyra capabilities (actions cannot be expressed in yaml)
- Using tooling to encode any arbitrary workflow in some opaque format that can be consumed by Lyra, which would expose all Lyra functionality but would not be human-friendly
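For comparison, here is a rough sketch of what each option could look like. The field names, the embedded yaml, and the encoded package are all invented for illustration:

```yaml
# Option 1 (sketch): Lyra yaml embedded directly. Human-readable, but
# limited to what Lyra's yaml front end can express (no actions).
apiVersion: lyra.example.org/v1alpha1
kind: Workflow
metadata:
  name: inline-example
spec:
  workflow:
    steps:
      vpc:
        resource: Aws::Vpc
        value:
          cidrBlock: 192.168.0.0/16
---
# Option 2 (sketch): an opaque, tool-generated encoding. Exposes all of
# Lyra's functionality, but is not meant to be read or edited by hand.
apiVersion: lyra.example.org/v1alpha1
kind: Workflow
metadata:
  name: packaged-example
spec:
  package: "H4sIAAAAAAAA..."
```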
Beyond that, packaging up Lyra workflows as described (which is the "tooling" referred to by the paragraph above) is one of several possible approaches to the problem. We want to support a development model where a person designs/develops/tests a workflow then makes it available for use from within Kube. We don't necessarily need to embed workflows in Kubernetes resources to make this happen. Our current functionality, where Kube holds a "pointer" to the workflow would work for this use case. We could deploy code via git repos like Puppet's r10k/code management. Or via upload to a service we run.
There's a final consideration around CRDs: whether we use one or two "dynamic" CRDs (like Workflow) that can be used to represent many different kinds of workflow or resource, or have a separate "static" CRD for every type of thing we want to manage (like "workflow A" or "VPC"). Whatever we end up doing, I think we have this choice to make. Registering large numbers of CRDs is unattractive but potentially gives a more native-feeling experience.
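To illustrate that trade-off, compare one generic Workflow kind against a dedicated kind per workflow (again, hypothetical names and fields):

```yaml
# "Dynamic" shape (sketch): one generic CRD; which workflow to run is data.
apiVersion: lyra.example.org/v1alpha1
kind: Workflow
metadata:
  name: my-vpc
spec:
  workflowName: vpc
  values:
    cidrBlock: 192.168.0.0/16
---
# "Static" shape (sketch): a dedicated CRD per workflow, so each workflow
# feels like a native Kubernetes kind, at the cost of registering many CRDs.
apiVersion: lyra.example.org/v1alpha1
kind: Vpc
metadata:
  name: my-vpc
spec:
  cidrBlock: 192.168.0.0/16
```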
We're at a bit of a stopping point for this work - will keep track of the future features described (especially packaging workflows, which isn't solved yet) but closing for now.