
kubeadm.conf updates not reflected in masters or nodes


/kind bug

What steps did you take and what happened:

I updated the kubeadm sections in the Ansible config (e.g. kubernetes_common_kubeadm_config_kubeletconfiguration and kubernetes_common_kubeadm_config_clusterconfiguration); I need the TTL feature gate enabled, which requires updates to the control plane manifests as well as to the kubelet command-line flags. The kubeadm.conf file gets placed, but none of the changes are distributed to the masters or nodes.
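
For reference, a minimal sketch of what those overrides might look like. The variable names come from this issue; that the gate in question is TTLAfterFinished, and that these variables take the raw kubeadm API sections as YAML dictionaries, are my assumptions:

```yaml
# Sketch only: assumed shape of the Wardroom group_vars overrides.
# The issue just says "the TTL featuregate"; TTLAfterFinished is assumed here.
kubernetes_common_kubeadm_config_clusterconfiguration:
  apiServer:
    extraArgs:
      feature-gates: "TTLAfterFinished=true"
  controllerManager:
    extraArgs:
      feature-gates: "TTLAfterFinished=true"
kubernetes_common_kubeadm_config_kubeletconfiguration:
  featureGates:
    TTLAfterFinished: true
```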

What did you expect to happen:

The phases that update the kubelet config and the control plane manifests should be run on the masters:

  • kubeadm init phase kubelet-start --config /etc/kubernetes/kubeadm.conf
  • kubeadm init phase control-plane all --config /etc/kubernetes/kubeadm.conf

The configs should then be uploaded to the in-cluster ConfigMaps (kubeadm init phase upload-config all --config /etc/kubernetes/kubeadm.conf), and workers should run /usr/bin/kubeadm join phase kubelet-start --config /etc/kubernetes/kubeadm.conf to update /var/lib/kubelet/{config.yaml,kubeadm-flags.env} with any changes from kubeadm.conf. This is needed because many, but not all, options have moved from command-line flags to the config file. A sketch of how these phases could be driven from Ansible follows.
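
The kubeadm commands below are taken verbatim from above; the play structure, inventory group names (masters, nodes), and run_once placement are my assumptions, not Wardroom's actual roles:

```yaml
# Illustrative only: re-run the kubeadm phases so an updated
# /etc/kubernetes/kubeadm.conf actually takes effect.
- name: Re-apply kubeadm.conf on the masters
  hosts: masters            # assumed inventory group name
  become: yes
  tasks:
    - name: Regenerate the kubelet config and restart the kubelet
      command: kubeadm init phase kubelet-start --config /etc/kubernetes/kubeadm.conf

    - name: Regenerate the control plane static pod manifests
      command: kubeadm init phase control-plane all --config /etc/kubernetes/kubeadm.conf

    - name: Upload the configs to the in-cluster ConfigMaps
      command: kubeadm init phase upload-config all --config /etc/kubernetes/kubeadm.conf
      run_once: yes         # only needs to happen once per cluster

- name: Re-apply kubeadm.conf on the workers
  hosts: nodes              # assumed inventory group name
  become: yes
  tasks:
    - name: Update /var/lib/kubelet/config.yaml and kubeadm-flags.env
      command: /usr/bin/kubeadm join phase kubelet-start --config /etc/kubernetes/kubeadm.conf
```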

Anything else you would like to add:

Requires the missing sections from issue #210: clusterconfig, kubeletconfig, initconfig, and joinconfig.

Also requires dropping different kubeadm.conf files per role: the masters' file should contain clusterconfig, kubeletconfig, and initconfig (?); the nodes' file should contain only initconfig and joinconfig, with the master's token embedded. A sketch of what the two files might look like follows.
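
For illustration only, here is one shape the per-role files could take. The kubeadm.k8s.io/v1beta1 API group matches the 1.14 branch noted below; the token, endpoint, version, and feature-gate values are hypothetical placeholders:

```yaml
# Sketch of the masters' /etc/kubernetes/kubeadm.conf (v1beta1 API assumed).
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
bootstrapTokens:
  - token: "abcdef.0123456789abcdef"       # hypothetical token
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.14.0
apiServer:
  extraArgs:
    feature-gates: "TTLAfterFinished=true" # assumed gate, see above
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  TTLAfterFinished: true                   # assumed gate, see above
```

```yaml
# Sketch of the nodes' /etc/kubernetes/kubeadm.conf - the JoinConfiguration
# embeds the master's bootstrap token for discovery.
apiVersion: kubeadm.k8s.io/v1beta1
kind: JoinConfiguration
discovery:
  bootstrapToken:
    token: "abcdef.0123456789abcdef"       # must match the master's token
    apiServerEndpoint: "10.0.0.10:6443"    # hypothetical control plane endpoint
    unsafeSkipCAVerification: true         # sketch only; pin caCertHashes in practice
```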

Environment:

  • Wardroom version: 1.14 branch

  • OS (e.g. from /etc/os-release): Ubuntu 18.04

This is addressed by #215