kubernetes-retired/kube-aws

Upgrade Kubernetes to 1.14.8

qnapnickchang opened this issue · 5 comments

Hi,

We upgraded k8s from 1.11.3 to 1.14.8, but we are getting the following error messages:

Oct 18 08:40:32 ip-10-0-1-86.us-west-2.compute.internal sh[1942]: W1018 08:40:32.950098 1942 options.go:267] unknown 'kubernetes.io' or 'k8s.io' labels specified with --node-labels: [kubernetes.io/role node-role.kubernetes.io/master]
Oct 18 08:40:32 ip-10-0-1-86.us-west-2.compute.internal sh[1942]: W1018 08:40:32.950899 1942 options.go:268] in 1.16, --node-labels in the 'kubernetes.io' namespace must begin with an allowed prefix (kubelet.kubernetes.io, node.kubernetes.io) or be in the specifically allowed set (beta.kubernetes.io/arch, beta.kubernetes.io/instance-type, beta.kubernetes.io/os, failure-domain.beta.kubernetes.io/region, failure-domain.beta.kubernetes.io/zone, failure-domain.kubernetes.io/region, failure-domain.kubernetes.io/zone, kubernetes.io/arch, kubernetes.io/hostname, kubernetes.io/instance-type, kubernetes.io/os)
Oct 18 08:40:32 ip-10-0-1-86.us-west-2.compute.internal sh[1942]: F1018 08:40:32.952137 1942 server.go:194] failed to load Kubelet config file /etc/kubernetes/config/kubelet.yaml, error failed to decode, error: v1beta1.KubeletConfiguration.KubeReserved: ReadMapCB: expect { or n, but found ", error found in #10 byte of ...|eserved":"cpu=100m,m|..., bigger context ...|i"},"kind":"KubeletConfiguration","kubeReserved":"cpu=100m,memory=100Mi,storage=2Gi","rotateCertific|...

In our cluster.yaml, I set:

kubelet:
  kubeReserved: "cpu=100m,memory=100Mi,ephemeral-storage=2Gi"
  systemReserved: "cpu=100m,memory=100Mi,ephemeral-storage=2Gi"
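
From the decode error above, the generated /etc/kubernetes/config/kubelet.yaml appears to contain the reservation as one flat string rather than structured fields. A rough reconstruction of the offending fragment (hypothetical, pieced together from the error output; the real file contains more fields) looks like:

# Hypothetical reconstruction of the failing fragment, based on the decode error;
# KubeletConfiguration expects kubeReserved to be a map, not a single string.
kind: KubeletConfiguration
kubeReserved: "cpu=100m,memory=100Mi,storage=2Gi"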

Does anyone have any suggestions?

Thanks for your help

I just came across this as well. The template for /etc/kubernetes/config/kubelet.yaml renders the string as-is:

kubeReserved: "cpu=100m,memory=100Mi,ephemeral-storage=2Gi"

This may have worked in the past, but I think the reservations are supposed to be rendered like this:

kubeReserved:
  cpu: 1000m
  memory: 1024Mi
  ephemeral-storage: 1024Mi
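
For reference, a minimal sketch of a structured /etc/kubernetes/config/kubelet.yaml fragment with map-style reservations (the apiVersion/kind header is the standard upstream KubeletConfiguration form; the resource values here are illustrative, not a recommendation):

# Sketch of a KubeletConfiguration using map-style reservations.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
kubeReserved:
  cpu: 100m
  memory: 100Mi
  ephemeral-storage: 2Gi
systemReserved:
  cpu: 100m
  memory: 100Mi
  ephemeral-storage: 2Gi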

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@fejta-bot: Closing this issue.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.