Proposal: helm commandeer
thomastaylor312 opened this issue · 40 comments
EDIT: This has been superseded by another idea
Problem
Many users want to be able to "adopt" currently running pods, services, etc. into a new or existing chart. One example of this is self-hosted Kubernetes control plane components (apiserver, DNS, etc.) after they have been bootstrapped by something like bootkube.
Proposed solution
I propose we add a new `commandeer` subcommand to Helm (to stick with a naval theme; we can go with `adopt` if people find it too confusing). The reason behind this is that we don't want to add additional complexity and flags to an already large flag set on `install`, `upgrade`, and `rollback`. The command would take a defined chart (e.g. `helm commandeer <chart_name>`) with matching manifests and use these to "commandeer" the objects into a chart manifest. Future iterations of the feature could add the ability to pass specific names of resources you want to commandeer.
Behavior
The passed chart would be rendered like a normal `install` or `upgrade` before moving on to a "capture" step. The "capture" step would use the object `GroupVersionKind`, namespace, and name to find a match among the already existing objects. For safety's sake, if any of the rendered manifests does not exactly match a resource, the "capture" step will fail. There will also be an `--override` flag and an `--ignore-match` flag to help configure this behavior. The `--override` flag will accept a match as long as the Kind and name match. The `--ignore-match` flag will drop any non-matching manifest from the final chart.
The last step will be a merge patch of the new manifest with the existing one. Once these are merged and sent to the API, the new chart will be saved into storage. So the tl;dr is below:
- Render chart
- Capture resources using `GroupVersionKind`, namespace, and name
- Merge manifests
- Save new chart
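To make the proposed "capture" check concrete, here is a rough manual approximation using plain `helm template` and `kubectl`. This is only a sketch of the idea, not the proposed implementation; `./mychart` is a placeholder chart path.

```bash
# Sketch only: render the chart, then ask the API server for every rendered
# object by its kind, namespace, and name. kubectl get exits non-zero if any
# object is missing, which is roughly the case where "capture" would fail.
helm template ./mychart \
  | kubectl get -f - -o name \
  && echo "all rendered resources have an existing match" \
  || echo "at least one rendered resource has no match; capture would fail"
```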
The `--into` flag
We also want to have the ability to adopt resources into a currently installed chart (e.g. bringing resources into an umbrella chart). This will be done with the `--into` flag, which will take a release name as an argument. All of the same steps will be followed to adopt resources, with the exception that existing resources will remain the same. This is a purely additive operation.
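A hypothetical invocation, just to illustrate the shape of the flag; the chart path and release name are made up, and none of this syntax exists yet:

```bash
# Hypothetical: adopt the resources rendered from ./mychart into the existing
# release "umbrella", leaving resources already owned by that release untouched.
helm commandeer ./mychart --into umbrella
```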
Open questions
- Should the `--override` flag force resources in the chart that don't have a match to be created anyway?
- Is `--ignore-match` something people will actually use?
- Do we need something like a `--replace` flag where the new manifest completely replaces the existing one?
- Is this too complex for a first pass, and should we just stick with specifying resource names to commandeer?
Other notes
I will update this proposal with suggestions from the comments so as to not have multiple proposal versions clogging up the issue
How will `commandeer` know which resources it would group? E.g. I have wordpress deployed, in the way the chart would deploy it - mariadb and wordpress deployments with some PVCs. I also have other mariadb deployments in a cluster. How will `commandeer` know which mariadb to put in a release?
When you say "exactly match a resource," we need to be careful to distinguish between facets not specified in the chart, and those the chart mandates not be present. For example, most manifests won't specify any "status"-related details, nor various annotations contributed by system facilities. I assume that you want to match the following on most objects:
- Metadata
  - namespace
  - name
  - labels
  - some annotations

  You'd want to ignore most of them not mentioned in the chart, right?
- Spec

  Beyond that, would you tolerate whatever happens to be there on an existing object?
First off, I think this is a great feature, and I personally love the name "commandeer" (though maybe "adopt" is a little more consistent and easier for people like me to spell).
I am duty bound to ask the question I am notorious for: Should this be a plugin?
Okay, with that out of the way....
Here's a different way we could approach the same issue. What if we generated the chart from the existing resources? I imagine it would work something like this:
```
$ helm commandeer --name=myNewChart deployment:www-frontend service:www-gateway configmap:www-config
Generated chart myNewChart
Added Deployment www-frontend
Added Service www-gateway
Added ConfigMap www-config
Chart myNewChart is released as myNewChart.v1
```
(syntax above could definitely be improved)
From there, the user would have a `myNewChart` chart to work with to manage that resource, and also a release record for the current chart. Going forward, the user could:
- Edit the manifests in `myNewChart/templates` and turn them into templates
- Rework the initially flat chart into subcharts
- Create a new values file, etc.
- And then do a `helm upgrade` to fully transform the release
I think this idea is sorta backwards from yours, so maybe it's too far off. I'm trying to figure out what the user's use-case looks like. Is it that they already have a chart that describes the existing stuff? Or is it that they want to take an existing thing and helmify it?
@devth , who once proposed doing something like this, might also have some ideas.
In any event, I like the proposal, and I know there are many who would benefit from this. So I'd be in favor of it.
@technosophos That was actually the other way I was thinking of doing it, but I didn't write it all out. I should have, because now that I have seen it written out, it seems like a better idea. I think it would be simpler to implement and less cognitive burden on the user.
I did think about the plugin way of going about this, but I have heard lots of different people mention this from time to time, so it would probably be valuable to have in core.
So anyway, let me rework the initial proposal over the weekend to incorporate your idea @technosophos and we can continue discussion. I'll try to see if we can get some other helm power users to comment on this
For my use case either of the 2 options would work, as long as with the second option I don't have to use the chart it generates. I'm mainly interested in bootstrapping kubernetes with something like https://github.com/kubernetes-incubator/bootkube for a self-hosted cluster, but being able to upgrade the manifests with a `helm upgrade`.
So in my case I would have a handful of manifests that are written to disk somewhere, but I would like to be able to run `helm commandeer kube-config` and `helm upgrade kube-config` to upgrade my controller-manager, API server, proxy, etc.
There might be some learnings to be taken from here: https://github.com/appscode/chartify
Proposal take 2
Given the feedback thus far, I propose the following functionality:
Command line design
There will be two ways to select resources to commandeer:
- A comma delimited list of `<kind>:<name>` (e.g. `Deployment:www-frontend`)
- A label query that will be the exact same functionality as with `kubectl` (e.g. `app in (foo, bar)`)
The default method will be the `<kind>:<name>` syntax, and the label query will be exposed by a command line flag (`--label` or `-l`). Users will be able to specify a combination of the two to select multiple resources.
The default behavior will be strict matching. For the label syntax this means it must match at least one resource. For `<kind>:<name>`, each resource listed must match an existing resource.
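Purely illustrative invocations of the two selection methods described above; neither the subcommand nor these flags exist yet:

```bash
# Select specific resources by <kind>:<name>, comma delimited.
helm commandeer --name myNewChart Deployment:www-frontend,Service:www-gateway

# Select resources by label query, with the same semantics as kubectl.
helm commandeer --name myNewChart -l 'app in (foo, bar)'
```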
Additional Flags
- `--ignore-missing`: This will allow for "non-strict" matching, meaning as long as the label selectors or `<kind>:<name>` list match at least one existing resource, it will create a release.
- `--dry-run`: Same as on other commands
- `--debug`: Similar to other commands. Will print out the created manifests to STDOUT
- `--name`: The name of the new chart/release
- `--output`: The file path to output the new chart to
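And an equally hypothetical example combining the flags above; all names here are placeholders:

```bash
# Hypothetical: non-strict matching, write the generated chart to ./charts,
# and show what would be created without actually making a release.
helm commandeer --name kube-config \
  --ignore-missing \
  --output ./charts \
  --dry-run --debug \
  Deployment:kube-apiserver,Deployment:kube-controller-manager
```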
How it will work
Tiller will look for matching resources and turn them into YAML manifests. It will also create a release with those objects. Once the release is complete, Helm will output the new chart into the current working directory or the one specified with `--output`. It will create one file per resource it commandeers.
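For illustration, the generated chart might look something like the layout below; the exact file naming is not part of the proposal:

```
myNewChart/
  Chart.yaml
  values.yaml
  templates/
    deployment-www-frontend.yaml
    service-www-gateway.yaml
    configmap-www-config.yaml
```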
Then, as stated by @technosophos:
> From there, the user would have a `myNewChart` chart to work with to manage that resource, and also a release record for the current chart. Going forward, the user could:
> - Edit the manifests in myNewChart/templates and turn them into templates
> - Rework the initially flat chart into subcharts
> - Create a new values file, etc.
> - And then do a helm upgrade to fully transform the release
Important note
Given past experience with other features (cough...cough `--wait`), this feature will be marked as alpha on initial release to allow for more testing across the various use cases. It will graduate to beta after that testing. It will not be marked as fully ready until Helm 3.0, or until the core maintainers decide (obviously with feedback from community testing) that it is ready, whichever comes first.
Let me know if this looks like the best path forward
Do you intend to save all of the objects' current state, or will you filter it, per my earlier comment?
@seh I would definitely drop the current status and probably some of the annotations that contain the last version of the object. When we create a PR, we can make sure all of those cases are covered
After talking in the developer meeting, the choice was made to develop this as a plugin.
I will be developing the plugin here. Feel free to follow along while I work on it
@thomastaylor312 How complete is your plugin?
@jamorales-bsft Not close at all yet. I don't get to work on it at my day job, so it is coming along slowly
@thomastaylor312 np thanks for the update!
Issues go stale after 90d of inactivity.
Mark the issue as fresh with `/remove-lifecycle stale`.
Stale issues rot after an additional 30d of inactivity and eventually close.
Prevent issues from auto-closing with an `/lifecycle frozen` comment.
If this issue is safe to close now please do so with `/close`.
Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale
/remove-lifecycle stale
/lifecycle frozen
Anyone still looking at this? With the new Helm version 3, it has become more difficult to solve the problem outlined above.
To summarize the main issue: when resources are created outside of Helm, it isn't currently possible to migrate them into a new Helm release without re-creating the resources. Ideally, the conflicting resources could instead be merged to avoid downtime and resource re-creation.
#7082 is related, but the proposed solutions won't solve the problem to adopt/assimilate non-helm managed resources, which is a feature I would like to see as well.
@phillycheeze so the complexity is still the same as it was in Helm 2, though the new SDK in Helm 3 should make it a bit easier from the code side of things. Basically there are a few steps:
- Decide how we want to mark resources for commandeering
- Grab each of those resources and trim off any unneeded fields (similar to `--export` in `kubectl`)
- Create a new chart and output each resource as a file inside of `templates/`
- Combine the resources into a single manifest that is stored in a release
- Save the release
There should be no recreating of resources. Further enhancements could include a way to commandeer resources using an existing chart (probably with annotations), but I think the first draft would be to just capture a list of resources.
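As a rough sketch of the "grab and trim" step above from the command line, using `kubectl` plus `yq` (v4); the exact fields to strip and the output layout are assumptions, not a settled design:

```bash
# Fetch a live object and drop server-managed fields, similar in spirit to the
# old `kubectl get --export`, then place it in a new chart's templates/ dir.
mkdir -p mynewchart/templates
kubectl get deployment www-frontend -o yaml \
  | yq eval 'del(.status) | del(.metadata.managedFields) | del(.metadata.resourceVersion) | del(.metadata.uid) | del(.metadata.creationTimestamp)' - \
  > mynewchart/templates/deployment-www-frontend.yaml
```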
There was some discussion at the last Helm call and some other people might take this on. Otherwise, I'll see about getting back around to this in January or February
This project hasn't seen activity in a while (and only works for Helm 2), but there's this: https://github.com/bmatcuk/helm-take-ownership
@bacongobbler - unfortunately, https://github.com/bmatcuk/helm-take-ownership is very Kubernetes object-centric (i.e. it doesn't work for Openshift object types or other CRDs). Not sure how that project could even be adapted to incorporate some of the concepts detailed in #2730 (comment), namely the ability to supply a:

> Comma delimited list of `<kind>:<name>` (e.g. `Deployment:www-frontend`)

due to its having logic for each Kubernetes object type. https://github.com/kubepack/chartify suffers from the same issue.
I'm just pointing out a solution that happens to work for a small subset of users. The idea could likely be expanded to support use cases like yours.
What is the status of this issue? It's been almost 3 years and there's still no proper solution for importing resources into helm.
On the Helm side, our position remains the same: This is not a feature we are adding to Helm, as it is outside of the scope of the project. However, we are deeply supportive of a plugin-based approach. If you have some ideas and a design, please feel free to post it here.
I am moving this off of the 3.2 milestone and onto 3.3 for now.
moving off of 3.3. Haven't seen any activity on this proposal.
This issue has been marked as stale because it has been open for 90 days with no activity. This thread will be automatically closed in 30 days if no further activity occurs.
Hello @bacongobbler
I've been inspired by the proposal and created a Helm v3 plugin called `adopt` to help users adopt existing k8s resources into a newly generated helm chart. See the helm-adopt repo.
Sorry, but this sounds too complicated. How about proposal 3: add a new flag to `upgrade` and `install` called `--auto-adopt-existing-resources` that does that and just prints a warning:
warning: importing existing resource XXXX, which was not being managed by helm
or
warning: importing existing resource YYYYY, which was previously being managed by chart ZZZZZ
I agree with the last comment. Why not even just call the flag `--adopt`? When set, a resource that already exists on the cluster just gets the Helm annotations and doesn't have to be deleted and recreated, which could mean downtime for production services.
The flag expresses intentionality, the current behavior would be untouched, and there would be no breaking changes for Helm users of the current version.
Ultimately I think this might even help adoption of Helm if we make onboarding of existing deployments easier.
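For reference, "just gets the Helm annotations" corresponds to what Helm 3 already checks when taking ownership of an existing object (the same annotations the shell function later in this thread applies); the resource and release names here are placeholders:

```bash
# With these set, a subsequent helm install/upgrade of release "myrelease" in
# namespace "default" can adopt the existing Deployment instead of erroring.
kubectl annotate deployment www-frontend \
  meta.helm.sh/release-name=myrelease \
  meta.helm.sh/release-namespace=default
kubectl label deployment www-frontend app.kubernetes.io/managed-by=Helm
```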
Agreed with the comment above as well, as it's more declarative. Any imperative statements that would need to be run manually for adopting resources make it a pain for pipelines. #7649 seems like a perfectly fine option for declaring what is in a release; anything else is not.
If there is still interest here, I suspect the ideas would best be incorporated into a HIP?
I found this blog post from a team that was able to migrate objects to helm by manually creating and editing state until helm accepted them. https://medium.com/beekeeper-technology-blog/zero-downtime-migration-to-helm-chart-37c562b24c2d
I will say that I am not working on this at all right now! But if someone does want to take the idea and roll it into a HIP, please feel free to do so!
Wrote up a quick function to do this automatically...
```bash
function helm-install-takeover() {
  release=$1
  release_namespace=$2
  shift
  shift
  while read -r line; do
    kind=`<<<$line cut -d/ -f1`
    nokind=`<<<$line cut -d/ -f2`
    namespace=`<<<${nokind} rev |cut -d. -f1|rev`
    name=${nokind%$namespace}
    name="${name%.}"
    kubectl annotate $kind $name ${namespace:+--namespace=$namespace} meta.helm.sh/release-name=$release
    kubectl annotate $kind $name ${namespace:+--namespace=$namespace} meta.helm.sh/release-namespace=$release_namespace
    kubectl label $kind $name ${namespace:+--namespace=$namespace} app.kubernetes.io/managed-by=Helm
  done <<< $(helm template $release -n $release_namespace $@ | kubectl grep -s)
  helm install $release -n $release_namespace $@
}

# usage
helm-install-takeover istiod istio-system istio/istiod
```
(depends on https://github.com/howardjohn/kubectl-grep, you could do it without but that is a bit too much bash for me)
When I do the equivalent of what you wrote @howardjohn, the install ends up in a weird state. It won't set any labels that exist in the helm chart but didn't exist before. This is also described in this comment, which was my first hit while researching this and led me on to this issue.
Even a further helm upgrade doesn't seem to apply new labels, as far as I can tell.
> Even a further helm upgrade doesn't seem to apply new labels, as far as I can tell.

Using helm diff and applying/installing it with the env var `export HELM_DIFF_THREE_WAY_MERGE=true` may solve your issue.
*It's better if you only diff first to see what's gonna change.
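For reference, a minimal sequence along those lines (assumes the helm-diff plugin is installed; the release and chart names are placeholders):

```bash
# Preview what would change with three-way merge enabled, then apply it.
export HELM_DIFF_THREE_WAY_MERGE=true
helm diff upgrade myrelease ./mychart
helm upgrade myrelease ./mychart
```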
> > Even a further helm upgrade doesn't seem to apply new labels, as far as I can tell.
>
> Using helm diff and applying/installing it with the env var `export HELM_DIFF_THREE_WAY_MERGE=true` may solve your issue.

Thanks for the reply. Just like `helm get manifest` shows labels that aren't actually there, the `helm diff` output also seems to imply it thinks the labels are set, even though they aren't on the actual object. It seems to me that the initial `helm install` doesn't modify anything but sets the state to mark as if it had.
The obvious workaround is to make sure the resources match completely before running `helm install`. But with automated tooling, that's sometimes easier said than done...