vitobotta/hetzner-k3s

Load Balancer is not adding targets

ThePlay3r opened this issue · 3 comments

Trying to follow the Setting up a cluster.md guide, I've successfully created a new cluster with 3 master nodes and 1 autoscaling pool.

It seems to work fine for the most part.

However, when I continue the guide and create a Hetzner Load Balancer, no targets are added to it (the count stays at 0), even after it gets a public IP.

I can add the targets manually, but I don't see much point in that.

Note: I set networking.private_network.enabled to false. Prior to this, the targets were added normally. (I've also made some other changes to the config, such as enabling the CNI settings and starting to use the autoscaler.)

Config:

hetzner_token: <>
cluster_name: c5r-02-eu-central
kubeconfig_path: "./kubeconfig"
k3s_version: v1.29.1+k3s1

networking:
  ssh:
    port: 22
    use_agent: false # set to true if your key has a passphrase
    public_key_path: "~/.ssh/id_ed25519.pub"
    private_key_path: "~/.ssh/id_ed25519"
  allowed_networks:
    ssh:
      - 0.0.0.0/0
    api:
      - 0.0.0.0/0
  public_network:
    ipv4: true
    ipv6: true
  private_network:
    enabled: false
    subnet: 10.0.0.0/16
    existing_network_name: ""
  cni:
    enabled: true
    encryption: true
    mode: cilium
  # cluster_cidr: 10.244.0.0/16 # optional: a custom IPv4/IPv6 network CIDR to use for pod IPs
  # service_cidr: 10.43.0.0/16 # optional: a custom IPv4/IPv6 network CIDR to use for service IPs. Warning, if you change this, you should also change cluster_dns!
  # cluster_dns: 10.43.0.10 # optional: IPv4 Cluster IP for coredns service. Needs to be an address from the service_cidr range


# manifests:
#   cloud_controller_manager_manifest_url: "https://github.com/hetznercloud/hcloud-cloud-controller-manager/releases/download/v1.20.0/ccm-networks.yaml"
#   csi_driver_manifest_url: "https://raw.githubusercontent.com/hetznercloud/csi-driver/v2.8.0/deploy/kubernetes/hcloud-csi.yml"
#   system_upgrade_controller_deployment_manifest_url: "https://github.com/rancher/system-upgrade-controller/releases/download/v0.13.4/system-upgrade-controller.yaml"
#   system_upgrade_controller_crd_manifest_url: "https://github.com/rancher/system-upgrade-controller/releases/download/v0.13.4/crd.yaml"
#   cluster_autoscaler_manifest_url: "https://raw.githubusercontent.com/kubernetes/autoscaler/master/cluster-autoscaler/cloudprovider/hetzner/examples/cluster-autoscaler-run-on-master.yaml"

datastore:
  mode: etcd # etcd (default) or external
  external_datastore_endpoint: postgres://....

schedule_workloads_on_masters: false

# image: rocky-9 # optional: default is ubuntu-22.04
# autoscaling_image: 103908130 # optional, defaults to the `image` setting
# snapshot_os: microos # optional: specifies the OS type when using a custom snapshot

masters_pool:
  instance_type: cpx31
  instance_count: 3
  location: fsn1

worker_node_pools:
  - name: small-dv-fsn1
    instance_type: ccx23
    instance_count: 2
    location: fsn1
    autoscaling:
      enabled: true
      min_instances: 1
      max_instances: 4
  #- name: small-dv-nbg1
  #  instance_type: ccx23
  #  instance_count: 2
  #  location: nbg1
    # image: debian-11
    # labels:
    #   - key: purpose
    #     value: blah
    # taints:
    #   - key: something
    #     value: value1:NoSchedule
#  - name: medium-autoscaled
#    instance_type: cpx31
#    instance_count: 2
#    location: fsn1
#    autoscaling:
#      enabled: true
#      min_instances: 0
#      max_instances: 3

embedded_registry_mirror:
  enabled: true

# additional_packages:
# - somepackage

# post_create_commands:
# - apt update
# - apt upgrade -y
# - apt autoremove -y

# kube_api_server_args:
# - arg1
# - ...
# kube_scheduler_args:
# - arg1
# - ...
# kube_controller_manager_args:
# - arg1
# - ...
# kube_cloud_controller_manager_args:
# - arg1
# - ...
# kubelet_args:
# - arg1
# - ...
# kube_proxy_args:
# - arg1
# - ...
# api_server_hostname: k8s.example.com # optional: DNS for the k8s API LoadBalancer. After the script has run, create a DNS record with the address of the API LoadBalancer.

ingress-nginx-annotations.yaml

# INSTALLATION
# 1. Install Helm: https://helm.sh/docs/intro/install/
# 2. Add the ingress-nginx Helm repo: helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
# 3. Update the local information about available charts from the chart repositories: helm repo update
# 4. Install ingress-nginx:
# helm upgrade --install \
# ingress-nginx ingress-nginx/ingress-nginx \
# -f ./ingress-nginx-annotations.yaml \
# --namespace ingress-nginx \
# --create-namespace

# LIST of all ANNOTATIONS: https://github.com/hetznercloud/hcloud-cloud-controller-manager/blob/master/internal/annotation/load_balancer.go

controller:
  kind: DaemonSet
  service:
    annotations:
      # Germany:
      # - nbg1 (Nuremberg)
      # - fsn1 (Falkenstein)
      # Finland:
      # - hel1 (Helsinki)
      # USA:
      # - ash (Ashburn, Virginia)
      # Without this annotation the load balancer won't be provisioned and the service will stay in the "pending" state.
      # You can check the state via "kubectl get svc -n ingress-nginx".
      load-balancer.hetzner.cloud/location: fsn1

      # Name of the load balancer. You will see this name in the Hetzner Cloud Console under "Your project -> Load Balancers".
      # NOTE: This is NOT the load balancer that the tool creates automatically for clusters with multiple masters (HA configuration).
      # Specify a different name here so that a separate load balancer is created for ingress-nginx.
      load-balancer.hetzner.cloud/name: c5r-02-eu-central-ingress

      # Ensures that the communication between the load balancer and the cluster nodes happens through the private network
      load-balancer.hetzner.cloud/use-private-ip: "true"

      # [ START: If you care about seeing the actual IP of the client then use these two annotations ]
      # - "uses-proxyprotocol" enables the proxy protocol on the load balancers so that ingress controller and
      # applications can "see" the real IP address of the client.
      # - "hostname" is needed just if you use cert-manager (LetsEncrypt SSL certificates). You need to use it in order
      # to fix fails http01 challenges of "cert-manager" (https://cert-manager.io/docs/).
      # Here (https://github.com/compumike/hairpin-proxy) you can find a description of this problem.
      # To be short: the easiest fix provided by some providers (including Hetzner) is to configure the load balancer so
      # that it uses a hostname instead of an IP.
      load-balancer.hetzner.cloud/uses-proxyprotocol: 'true'

      # 1. "yourDomain.com" must be configured in the DNS correctly to point to the Nginx load balancer,
      # otherwise the provision of certificates won't work;
      # 2. if you use a few domains, specify any one.
      load-balancer.hetzner.cloud/hostname: creathors.com
      # [ END: If you care about seeing the actual IP of the client then use these two annotations ]

      load-balancer.hetzner.cloud/http-redirect-https: 'false'

If you disable the private network, you cannot set load-balancer.hetzner.cloud/use-private-ip: "true" in the load balancer annotations :)
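A minimal sketch of the fix, assuming the rest of the values file stays as posted above: with networking.private_network.enabled set to false, the nodes can only be attached to the load balancer via their public IPs, so set the annotation to "false" (or omit it entirely):

controller:
  kind: DaemonSet
  service:
    annotations:
      load-balancer.hetzner.cloud/location: fsn1
      load-balancer.hetzner.cloud/name: c5r-02-eu-central-ingress
      # With the private network disabled, targets must be attached
      # via their public IPs, so don't request the private IP here:
      load-balancer.hetzner.cloud/use-private-ip: "false"

After changing the values file, re-run the helm upgrade --install command from the installation steps so the CCM reconciles the service and adds the targets.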

Also, even if you change the load balancer configuration manually, those changes will be reverted by Kubernetes/the CCM during reconciliation.
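For the same reason, any setting you want to persist should be declared as an annotation on the service rather than changed in the Hetzner console. For example (a sketch; check the annotations list linked above for the exact supported keys), changing the load balancer type:

controller:
  service:
    annotations:
      # Resizing the LB in the console would be reverted on reconciliation;
      # declare the desired type here instead and let the CCM apply it:
      load-balancer.hetzner.cloud/type: "lb21"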

Somehow I completely missed that. I successfully deployed the new cluster and migrated to it. Thanks for the great tool!

Glad it's working! Enjoy your new cluster :)