kubernetes/kubernetes

Implement templates

bprashanth opened this issue · 33 comments

Tracks the implementation of the templating proposal: https://github.com/kubernetes/kubernetes/blob/master/docs/proposals/templates.md.

This WIP is essentially a quick merge of OpenShift templates with HEAD, plus some modifications to get the core substitution logic and resource to work with the apiserver: #23895

See proposal for details on what remains, from memory:

  • (()) syntax
  • /processedTemplates and /templates api endpoints
  • pkg/client
  • kubectl integration
  • validation
  • Unit tests for templates, etcd, etc.
  • e2es
  • docs

Other items to track

  • Template processing
  • Template create validation + etcd_test.go tests
  • Template update validation + etcd_test.go tests
  • Processing templates without saving them
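As a rough illustration of the core substitution step the lists above track (a hedged sketch only — the proposal's exact syntax and semantics govern, and names like `substitute` are hypothetical, not the actual implementation), here is a minimal Python version that replaces `$(PARAM)` references with declared parameter values:

```python
import re

# Matches $(NAME) where NAME looks like an uppercase parameter identifier.
PARAM_RE = re.compile(r"\$\(([A-Z_][A-Z0-9_]*)\)")

def substitute(manifest: str, params: dict) -> str:
    """Replace each $(NAME) with its parameter value; leave unknown names intact."""
    def repl(match):
        name = match.group(1)
        # Fall back to the original $(NAME) text when no value is supplied.
        return str(params.get(name, match.group(0)))
    return PARAM_RE.sub(repl, manifest)

manifest = "image: $(IMAGE):$(TAG)\nreplicas: $(REPLICAS)"
print(substitute(manifest, {"IMAGE": "nginx", "TAG": "1.9", "REPLICAS": 3}))
# image: nginx:1.9
# replicas: 3
```

The real implementation also has to handle the `(())` non-string substitution case and validation, which this sketch deliberately omits.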

@bgrant0607 mentioned someone from @kubernetes/kubectl might be interested.
@bparees since you had the original proposal.

An implementation in Rust:
https://github.com/InQuicker/ktmpl

Can we have a template feature in https://github.com/kubernetes/features as an umbrella for the ongoing implementations?

I'll create one.

Does that mean we have review bandwidth?

I have review bandwidth, though my review perspective is largely limited to the original proposal's intent and what was done in OpenShift, not so much "is this how we want it to be integrated into k8s".

SGTM. Let me sync up with Brian on our goals. A good chunk of Templates is implemented. Would like to get feedback on some changes I made to the original proposal.

Hey all, we took a stab at CLI-based parameterization, which is related to templating. Our goal was to experiment with the simplest way to expose necessary parameters to users without requiring knowledge of how Kubernetes objects work. We haven't gotten into exposing the authoring of templates yet, but the template proposal mentioned above looks really interesting - how far along is work on parameterization? Would love to help if we can!

/cc @kubernetes/huawei

Some potentially relevant discussion over at #30716.

This is slightly off topic, but there is a risk that templated YAMLs will end up as another GCL after a few iterations - a Turing-complete, bad programming language.

Some projects decide to embrace "configuration as code" - for example, Airflow explains its case for configuration as Python code rather than YAML/JSON in http://nerds.airbnb.com/airflow/ - "While yaml or json job configuration would allow for any language to be used to generate Airflow pipelines, we felt that some fluidity gets lost in the translation. Being able to introspect code (ipython!, IDEs) subclass, meta-program and use import libraries to help write pipelines adds tremendous value."

I am in the process of deleting all my handwritten YAMLs and using https://github.com/kubernetes/client-go to define my configuration as Go objects, later exporting them to YAML using json.NewYAMLSerializer. It feels like a better solution than YAML templating for moderately complex configurations. For example, consider editor support or type safety - I can just create a function that returns a Container object and see all the API methods that accept the created Container.
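The commenter's workflow is in Go with client-go; as a language-neutral illustration of the same "typed config objects, serialized at the end" idea, here is a minimal Python analog. The `Container`/`PodSpec` classes and `web_container` helper are hypothetical stand-ins, not a real client library:

```python
import json
from dataclasses import dataclass, field, asdict
from typing import List

# Hypothetical minimal types standing in for real client library objects.
# A typed object model gives editor completion and type checking, and only
# becomes a manifest at serialization time.
@dataclass
class Container:
    name: str
    image: str
    ports: List[int] = field(default_factory=list)

@dataclass
class PodSpec:
    containers: List[Container]

def web_container(tag: str) -> Container:
    # Call sites see the Container API instead of editing raw YAML by hand.
    return Container(name="web", image=f"nginx:{tag}", ports=[80])

spec = PodSpec(containers=[web_container("1.9")])
print(json.dumps(asdict(spec), indent=2))
```

The Go version gets the same benefits plus the real Kubernetes types, so the apiserver's schema is enforced at compile time rather than at apply time.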

This is slightly off topic, but there is a risk that templated YAMLs will end up as another GCL after a few iterations - a Turing-complete, bad programming language.

@kozikow Explicitly avoiding Turing completeness is one of the principles of the template proposal; the capabilities are intentionally very limited. But we do not want to throw people into the deep end of "write your object definitions in Go/Python/etc." That option is always available, of course, and people will always be free to choose it if it meets their needs, but that's not the role this is intended to fill.

I initially landed on this issue evaluating different options for dynamic config generation. I can't find a better place than this issue to add more options to consider for people like past-me, investigating options for dynamically generated k8s configs - I wrote a post about it: https://kozikow.com/2016/09/02/using-go-to-autogenerate-kubernetes-configs/.

Does this cover the case where Docker images specified in the deployment spec come from a template? It would be sweet to have the Docker image come from a ConfigMap.

@bprashanth why are we putting this into core, when there are tools like Helm that are handling this?

https://github.com/atlassian/smith is a slightly different take on this.

To answer your question in brief (the template proposal covers this a lot better) - Helm and Templates are different tools and have different strengths.

The strengths of templates are that they can be stored on the API server as API objects, that they can define parameterization clearly, and that they are simple enough not to require code execution in order to process (they are not Turing complete, so they are valid API representations). Also, an entire template is a single file, so it is a piece of config and can be transmitted more easily than zipped repos (that's a minor point, but important when discussing ease of use). Templates force the API to be good enough to represent the problems we want to solve. We hear a lot of requests to improve parameterization and describe parameters in more complex ways (think form builders) so that central operations teams can give developers choices about what to create. Something like Helm would need that formally defined - and that's what Templates represent (imagine a Template with N parameters that contains a single API object that deploys a Helm chart to the current namespace - the parameters are nicely described in a UI, but Helm does the apply / create syntax).

The strength of Helm is that charts can be more flexible, and that the Helm tiller server can manage rolling out changes and recording the history of a chart deployment. Helm can do things that templates cannot, but requires that extra execution environment and is both more verbose and less "kube API like" (at least today).

Something like Helm is also a very simple Ansible or Puppet, so then the question might be - why is Helm recreating Ansible or Puppet? My answer would be that Helm trades flexibility for strength of purpose - simpler, more compact, and easier for a novice to approach. In the long term I believe Helm will need more flexibility than it has today to deal with more complex servers, and so better integration into config management tools is a requirement for Kubernetes. Templates do the same thing - they focus on the 80% case of parameterization and expect the API objects to provide the richness of purpose.

Finally, templates can deal with anything that is an API object, which means if Helm was a proper Kube "extension" then templates could instantiate Helm charts (if Helm had a list of charts that were deployable). That use case is very important to a lot of the people who deploy OpenShift or Kubernetes - the ability to have a catalog of content to create that is centralized and can be rolled out by individuals. Service catalog in time will help with that, but that's more black box software (not the white box software that both Helm and Templates represent).

Since this originated before we had "core" as a term, I'd say that templates are no more "core" than Helm, but templates are more like a Kube extension in design (literally, an API object that extends Kube) while Helm is more of a service that runs on Kube, so templates require more design work (which is the proposal Ben linked to). Templates are certainly more like Ingress than Pods - having an API you can query to list all of the things you can "create" is extremely valuable, but you may have equally valid ways of creating content.

Hope that gives some context.

A broader overview ("Whitebox COTS application management"), which is shared with kubernetes-dev, kubernetes-sig-apps, and kubernetes-sig-service-catalog:

https://docs.google.com/document/d/1S3l2F40LCwFKg6WG0srR6056IiZJBwDmDvzHWRffTWk/edit#

Independent implementation:
https://github.com/InQuicker/ktmpl

When do you suppose this will come to fruition? I'm having to jury-rig YAML file parameterization using some other bash string-substitution tool.

@evictor I don't think a lot of work is being done on this, in favour of third-party tools.

The "big" one is Helm, but it comes with a lot of added complexity (but also a lot of pre-packaged "charts").

If you're looking for something simpler (templating with a notion of different environments/clusters), take a look at kontemplate.

Thanks for the pointers. Kontemplate looks good. For POC purposes I am using envsubst, which is just a very simple command-line tool.

rot26 commented

@evictor I did not get envsubst working on mac. I started using good ol' sed until I googled my way to sigil for text replace. (good features, not specific to kubernetes)
After I set everything up with sigil, I discovered helm. I will start with helm next time for kubernetes.

@rot26 I was able to install it relatively painlessly using brew install gettext, then bringing the envsubst binary into PATH; I found the envsubst binary in /usr/local/Cellar/gettext/0.19.8.1/bin.
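For anyone who can't get envsubst installed, Python's stdlib `string.Template` does roughly the same job: substitute `$VAR` / `${VAR}` references from the environment, leaving undefined variables untouched. A hedged sketch (the variable names `IMAGE_TAG` and `APP_NAME` are made up for illustration):

```python
import os
from string import Template

# Stand-ins for exported shell variables; APP_NAME is deliberately unset
# to show that safe_substitute leaves unknown references as-is.
os.environ["IMAGE_TAG"] = "1.9"
os.environ.pop("APP_NAME", None)

manifest = "image: nginx:${IMAGE_TAG}\nname: ${APP_NAME}"
rendered = Template(manifest).safe_substitute(os.environ)
print(rendered)
# image: nginx:1.9
# name: ${APP_NAME}
```

This mirrors envsubst's behavior closely enough for POC use, though neither tool knows anything about Kubernetes objects - they are pure text substitution.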

There are many, many tools that can do parameter substitution and other forms of config generation:
  • Helm
  • OC new-app
  • Kompose
  • Spread
  • Draft
  • Ksonnet/Kubecfg
  • Konfd
  • Templates/Ktmpl
  • Fabric8 client
  • Kubegen
  • kenv
  • Ansible
  • Puppet
  • KPM
  • Nulecule
  • OpenCompose / kedge
  • Chartify
  • Podex
  • k8sec
  • Kploy
  • kb80r
  • k8s-kotlin-dsl
  • KY
  • kdeploy
  • K8comp
  • Kontemplate
  • kexpand
  • Forge
  • Deploymentizer
  • Broadway
  • srvexpand

(links will go elsewhere)

For now, the client-side implementation exists.

Deleted snarky comments aside, we use Kubernetes because it is an opinionated framework. Kubernetes is able to grow so quickly because there is one good implementation of every needed feature for the rest of the Kubernetes community to build on. IMO, it doesn't make sense for Kubernetes to be opinionated about everything except templating.

I suggest that people interested in helping with this topic participate in the Application Definition Working Group:

https://github.com/kubernetes/community/tree/master/wg-app-def