kube-vip/helm-charts

Cloud provider Helm deployment needs updating - affinity missing? No HA

Jamison1 opened this issue · 1 comment

While deploying the Helm chart for kube-vip-cloud-provider, I noticed that podAntiAffinity wasn't picked up from the values file when deploying an HA setup across multiple nodes. All pods ended up on the same node, defeating the HA setup.

The chart's Deployment template does not appear to pick up the affinity section from the values file.

This is different from what the normal (non-Helm) deployment manifest produces.

Expectation: if podAntiAffinity is added to the values file, I expect it to be respected, deploying one pod per node.
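For reference, the affinity block in the values file looked roughly like this (a sketch, not the exact file; the matchLabels key mirrors the app.kubernetes.io/name label the chart applies to its pods, visible in the describe output below):

affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app.kubernetes.io/name: kube-vip-cloud-provider
        topologyKey: kubernetes.io/hostname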

Here is the describe output for the Deployment when attempting a 3-replica deployment with Helm (note the Pod Template carries no affinity):

kubectl describe deployment -n kube-system kube-vip-cloud-provider
Name:                   kube-vip-cloud-provider
Namespace:              kube-system
CreationTimestamp:      Wed, 04 Oct 2023 01:21:34 +0000
Labels:                 app.kubernetes.io/managed-by=Helm
Annotations:            deployment.kubernetes.io/revision: 1
                        meta.helm.sh/release-name: kube-vip-cloud-provider
                        meta.helm.sh/release-namespace: kube-system
Selector:               app.kubernetes.io/instance=kube-vip-cloud-provider,app.kubernetes.io/name=kube-vip-cloud-provider
Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:           app.kubernetes.io/instance=kube-vip-cloud-provider
                    app.kubernetes.io/name=kube-vip-cloud-provider
  Service Account:  kube-vip-cloud-provider
  Containers:
   kube-vip-cloud-provider:
    Image:      kubevip/kube-vip-cloud-provider:v0.0.7
    Port:       <none>
    Host Port:  <none>
    Command:
      /kube-vip-cloud-provider
      --leader-elect-resource-name=kube-vip-cloud-controller
    Limits:
      cpu:     100m
      memory:  128Mi
    Requests:
      cpu:        50m
      memory:     64Mi
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   kube-vip-cloud-provider-754c6674b (3/3 replicas created)
Events:          <none>

Here is the describe output for one of the pods, showing tolerations but no affinity applied:

kubectl describe pod -n kube-system kube-vip-cloud-provider-754c6674b-tphwm
Name:             kube-vip-cloud-provider-754c6674b-tphwm
Namespace:        kube-system
Priority:         0
Service Account:  kube-vip-cloud-provider
Node:             server-01/x.x.x.x
Start Time:       Wed, 04 Oct 2023 01:21:35 +0000
Labels:           app.kubernetes.io/instance=kube-vip-cloud-provider
                  app.kubernetes.io/name=kube-vip-cloud-provider
                  pod-template-hash=754c6674b
Annotations:      <none>
Status:           Running
IP:               10.42.0.148
IPs:
  IP:           10.42.0.148
Controlled By:  ReplicaSet/kube-vip-cloud-provider-754c6674b
Containers:
  kube-vip-cloud-provider:
    Container ID:  containerd://f273286093b6e2d50a5eb662b6ea92a2909fb4eced4c80053fcce37be9f4d057
    Image:         kubevip/kube-vip-cloud-provider:v0.0.7
    Image ID:      docker.io/kubevip/kube-vip-cloud-provider@sha256:07bc28af895dc8bc04489bf79a92f74d4b0e325c863b6711cc0655b4ca0fd19b
    Port:          <none>
    Host Port:     <none>
    Command:
      /kube-vip-cloud-provider
      --leader-elect-resource-name=kube-vip-cloud-controller
    State:          Running
      Started:      Wed, 04 Oct 2023 01:21:46 +0000
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     100m
      memory:  128Mi
    Requests:
      cpu:        50m
      memory:     64Mi
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6fgkp (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  kube-api-access-6fgkp:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 :NoSchedule op=Exists
                             :NoExecute op=Exists
Events:                      <none>
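For completeness, the node placement of every replica can be confirmed with a wide pod listing (using the label shown above); in this cluster all three replicas reported the same NODE:

kubectl get pods -n kube-system -l app.kubernetes.io/name=kube-vip-cloud-provider -o wide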

What works without Helm

This setup works when using the manifest file instead of Helm:

affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            component: kube-vip-cloud-provider
        topologyKey: kubernetes.io/hostname
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: node-role.kubernetes.io/master
              operator: Exists
        - matchExpressions:
            - key: node-role.kubernetes.io/control-plane
              operator: Exists
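With this block in place, the required podAntiAffinity on topologyKey kubernetes.io/hostname allows at most one replica per node, and the two nodeSelectorTerms (which are ORed) restrict scheduling to nodes labelled with either the master or the control-plane role.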

The Helm chart you see above doesn't implement affinity. It's only used for the kube-vip DaemonSet.

Something similar to the above should be added to https://github.com/kube-vip/helm-charts/blob/main/charts/kube-vip-cloud-provider/templates/deployment.yaml to support it, as sketched below.
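A minimal sketch of what that could look like under spec.template.spec in that template, following the passthrough pattern that helm create scaffolds (this assumes a top-level affinity key in values.yaml; a suggestion, not the chart's current code):

      # Hypothetical addition to templates/deployment.yaml, under spec.template.spec:
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}

Anything set under affinity: in values.yaml would then render verbatim into the pod spec, so the manifest's podAntiAffinity/nodeAffinity block above could be carried over unchanged.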