roboll/helmfile

Labels should allow excluding charts by default

amnk opened this issue · 21 comments

amnk commented

This is probably a feature request.

Labels in their current form allow a nice separation of charts, but they do not change the default helmfile sync behavior, where all charts are executed. For example, if we follow the approach described in https://github.com/roboll/helmfile/tree/master/examples#managing-oneshot-jobs-with-helmfile, the usual helmfile sync would execute everything, while it might be more natural to run only the regular charts, without that one-time job.

Would it be possible to allow exclusion of some charts based on labels, or should all pipelines just include explicit --selectors?
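
To make the problem concrete, here is a minimal sketch (release names, charts, and labels here are hypothetical):

releases:
- name: frontend               # regular, long-running chart
  chart: stable/nginx
  labels:
    tier: app
- name: dbmigrator             # oneshot job, as in the linked example
  chart: ./charts/dbmigrator
  labels:
    job: dbmigrator

# Today, a plain `helmfile sync` runs BOTH releases; skipping the job
# requires remembering to pass an explicit selector every time:
#   helmfile --selector tier=app sync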

@amnk Good point. Honestly I'm not very sure but would --selector job!=dbmigrator exclude the oneshot job only? It isn't straightforward even if it worked, but I don't have a better alternative.

Also, it doesn't address your concern that helmfile sync should run long-running services only. Would it make sense to add defaultSelector to helmfile.yaml, which is applied to helmfile sync when no --selector is specified?
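
For illustration, the exclusion idea above would be invoked like this (assuming the oneshot release is labeled job=dbmigrator; helmfile's selector syntax supports the != operator):

# exclude only the release(s) labeled job=dbmigrator:
helmfile --selector job!=dbmigrator sync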

amnk commented

@mumoshu defaultSelectors (plural) makes sense to me. For example, if someone has tier: backend, tier: frontend and tier: ci, he might want to use tier: backend and tier: frontend by default.

I hope that is not too much to ask for :)

@amnk Thanks. Understood :) The last thing I'm considering now is how we could harmonize the feature with the helmfile.d feature.

In a nutshell, helmfile -f dir/ sync is almost equivalent to helmfile -f dir/00-foo.yaml sync PLUS helmfile -f dir/01-foo.yaml sync. In which yaml file would you like to define the defaultSelectors?

Possible options (sketched below):

  1. Introduce another yaml file to define global options like defaultSelectors 🤔
  2. Merge defaultSelectors from multiple yaml files 🤔
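
For illustration, the two options might look like this (file names are hypothetical; neither behavior existed at the time):

# Option 1: a dedicated global-options file, e.g. helmfile.d/globals.yaml
defaultSelectors:
- tier=frontend
- tier=backend

# Option 2: any helmfile.d/*.yaml may declare defaultSelectors, and
# helmfile unions them across files before evaluating releases.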

Any thoughts? @osterman @sstarcher

amnk commented

@mumoshu I vote for merge of defaultSelectors from multiple files.

To give it a little more detail: usually all those conf.d folders, where you can have several configuration files, are treated the same way: the configuration is read, then merged, then evaluated and checked for errors, then executed. If you have this flow in mind, automerge fits perfectly.

But if you want to evaluate the files one by one, then the order of config files becomes very important, and it removes some of the determinism.

@amnk Thanks! I'm trying to understand - what is your expected outcome for the below example?

helmfile.d/00-frontend.yaml:

defaultSelectors:
- tier: app

releases:
- name: frontend-app
  # ...
  labels:
    tier: app
- name: frontend-ci
  # ...

helmfile.d/01-backend.yaml:

releases:
- name: backend-app
  # ...
  labels:
    tier: app
- name: backend-ci
  # ...

amnk commented

@mumoshu both frontend-app and backend-app will be deployed.

I'm against merging helmfiles that are in the helmfile.d directory. It adds significant complication and prevents us from having separate configurations for those files. With the separation allowed by helmfile.d, I can now specify a --timeout setting for one of the yaml files and it will not affect the other files.

If we were to implement a default selector, you could put it into each of your helmfile.yaml files, or they could each have different default selectors. Although overall I'm skeptical of the idea of a default selector and wonder why you don't just put those releases into an entirely separate helmfile in a different location, as it sounds like you have operational things you want to separate out.

amnk commented

@sstarcher agreed, I could have put it in a different file. But on my side the use case is simple: replace multiple Helm calls with one helmfile call, and then use selectors to update things only when they change in code. Having several helmfiles complicates things, and one file sounded quite logical to me.

At the same time I have a question as well: how is helmfile.d different from a simple bash script?

for f in helmfile.d/*.yaml     # files are picked up in lexical order
do
    helmfile --file "$f" sync
done

This can easily be enhanced with conditionals as well, e.g. to set a timeout for some files. Sorry if the question sounds dumb; I'm most likely missing something here.

That's essentially what helmfile is doing for helmfile.d.

I would think it would be logically easier to have two separate helmfiles in your case:

  • Update things: helmfile sync
  • Run jobs: helmfile -f jobs sync

If we implemented merging, it would remove a lot of useful capabilities that we currently have in helmfile. I'm up for other solutions, but I can't think of one at the moment that would make this easier, unless we added first-class support for jobs.
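
For reference, one way that split could look on disk (paths are hypothetical):

.
├── helmfile.yaml        # long-running services; run via plain `helmfile sync`
└── jobs/
    └── helmfile.yaml    # oneshot jobs; only run via `helmfile -f jobs/helmfile.yaml sync`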

@osterman FYI, you'd be interested in this issue about default selectors.

@amnk @sstarcher Hey! So how about just adding defaults.selectors to helmfile.yaml, which is overridable via the --selector flag?

For the use case of colocating releases for oneshot jobs and other long-running apps in each helmfile.yaml, you may have the below snippet in all the yaml files:

defaults:
  selectors:
  - tier=frontend
  - tier=backend

And a plain helmfile apply is literally translated to helmfile -l tier=frontend -l tier=backend apply, whereas helmfile --selector tier=ci apply or helmfile --selector tier=oneshot apply targets the respective releases only.

In case "all" the helmfiles under helmfile.d/ have no releases matching the specified selector, helmfile fails. In other words, it succeeds as long as at least one of the helmfiles has matching releases.

Does this make sense? Any comments are welcome. Thanks.
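
Putting the proposal together, a sketch (release names are hypothetical; defaults.selectors is the proposal under discussion, not a shipped feature):

defaults:
  selectors:
  - tier=frontend
  - tier=backend

releases:
- name: web
  labels:
    tier: frontend
- name: api
  labels:
    tier: backend
- name: dbmigrator
  labels:
    tier: oneshot

# helmfile apply                          -> web and api (defaults applied)
# helmfile --selector tier=oneshot apply  -> dbmigrator only (defaults overridden)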

So for implementing something like this, I think it likely makes sense to support all helmfile-specific command-line flags in the file, and if they are also set on the command line, the command-line values would override the file's.

Maybe

options:
  selectors:
  - tier=frontend
  concurrency: 1
  auto-approve: true
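
Under that proposal, command-line flags would take precedence over the file (a sketch; --selector and --concurrency are existing helmfile flags, while options: and auto-approve are only part of the proposal):

helmfile sync                                      # uses tier=frontend, concurrency 1
helmfile --selector tier=ci --concurrency 4 sync   # CLI values override the file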

amnk commented

@mumoshu I like that approach

@sstarcher I like your idea as well! :)

@sstarcher Sounds great. Would you like it if it were configurable per environment, too?

environments:
  prod:
    options:
    # override the top-level(default) options as you like
    # ...

options:
  selectors:
  - tier=frontend
  concurrency: 1
  autoApprove: true

@amnk I missed mentioning you :) I appreciate your comment, too!

I would say until we have a strong use case for supporting it per environment, it likely just makes sense to only support it at the top level.

@sstarcher Noted. Thanks for the confirmation 👍

As the selectors are overridable via the --selector flag, I'm now against naming it options:. I'd prefer defaults::

defaults:
  selectors:
  - tier=frontend
  concurrency: 1
  autoApprove: true

But then the problem is how we differentiate between helmDefaults vs defaults. It is confusing, especially given that we can also override what is provided in helmDefaults.

helmDefaults:
  tillerNamespace: tiller-namespace  #dedicated default key for tiller-namespace
  kubeContext: kube-context          #dedicated default key for kube-context
  # additional and global args passed to helm
  args:
    - "--set k=v"
  # defaults for verify, wait, force, timeout and recreatePods under releases[]
  verify: true
  wait: true
  timeout: 600
  recreatePods: true
  force: true

defaults:
  selectors:
  - tier=frontend
  concurrency: 1
  autoApprove: true

My idea is to make helmDefaults just an alias of defaults: and gradually deprecate helmDefaults. On the other hand, I would say that defaults is for anything global to a helmfile run that is customizable via command-line flags.

I would like to know whether the selector can be a regex.

Nope.

Is there any current way to do this? We want to implement our long-running DB migrations as Helm chart Jobs, but we also want to prevent someone from being able to run them by accident.

Moving the jobs into their own helmfile would be fine, but we would still need a way to force passing something on the CLI before they run. Even being able to set them to condition: false and overriding that on the command line would work.

helmfile -f jobs.yaml -e qa1 --selector name=app1-job --set condition.enable=true --set image.tag=yadda yadda

Even something like that would work, as long as a plain helmfile -f jobs.yaml apply failed to run.
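
One workaround sketch along those lines uses helmfile's installed: release field plus Go templating (RUN_DB_JOBS is a hypothetical environment variable; note that installed: false also tells helmfile to remove an already-installed release on sync):

releases:
- name: app1-job
  chart: ./charts/db-migrate   # hypothetical chart path
  installed: {{ env "RUN_DB_JOBS" | default "false" }}

# a plain `helmfile -f jobs.yaml apply` leaves the job uninstalled;
# running it deliberately requires:
#   RUN_DB_JOBS=true helmfile -f jobs.yaml -e qa1 --selector name=app1-job apply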