Failed to obtain an IP address for the LoadBalancer Service
Closed this issue · 8 comments
My Kubernetes cluster was created with kind, and I installed Gitea with Helm, switching the Gitea SSH Service to type LoadBalancer (a rough sketch of the Helm override is included below, after the Service listing). I ran cloud-provider-kind in a terminal and kept it in the foreground, but it reported the error below. I don't know what is causing it; could you help me take a look at this problem? Thank you.
I0612 15:57:15.128072 48428 server.go:117] updating loadbalancer tunnels on userspace
I0612 15:57:15.204600 48428 tunnel.go:38] found port maps map[10000:55001 22:58004] associated to container kindccm-ZWHVOQK77FGBMGRRVRJDFYOQRB5DX3QHLYDGLG2F
I0612 15:57:15.245402 48428 tunnel.go:45] setting IPv4 address 172.18.0.6 associated to container kindccm-ZWHVOQK77FGBMGRRVRJDFYOQRB5DX3QHLYDGLG2F
E0612 15:57:15.248132 48428 controller.go:298] error processing service ops/gitea-ssh (retrying with exponential backoff): failed to ensure load balancer: exit status 1
I0612 15:57:15.248180 48428 event.go:389] "Event occurred" object="ops/gitea-ssh" fieldPath="" kind="Service" apiVersion="v1" type="Warning" reason="SyncLoadBalancerFailed" message="Error syncing load balancer: failed to ensure load balancer: exit status 1"
The EXTERNAL-IP of the gitea-ssh Service stays <pending>:
$ kubectl get svc
NAME                  TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
gitea-http            ClusterIP      None            <none>        3000/TCP       50m
gitea-postgresql      ClusterIP      10.96.203.167   <none>        5432/TCP       50m
gitea-postgresql-hl   ClusterIP      None            <none>        5432/TCP       50m
gitea-ssh             LoadBalancer   10.96.112.178   <pending>     22:32233/TCP   6m4s
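For context, the SSH Service type was switched to LoadBalancer through Helm values, roughly like the sketch below; the chart repository alias and the service.ssh.type key are assumptions about the Gitea chart rather than the exact command I ran:

# Illustrative sketch only: expose Gitea's SSH service as a LoadBalancer via Helm values.
# The repo alias (gitea-charts) and the values keys are assumptions about the Gitea chart.
$ helm upgrade --install gitea gitea-charts/gitea \
    --namespace ops \
    --set service.ssh.type=LoadBalancer \
    --set service.ssh.port=22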
Is there any other information I need to provide?
Here is the complete log output:
$ cloud-provider-kind
I0612 16:03:40.322404 50319 controller.go:167] probe HTTP address https://127.0.0.1:59611
I0612 16:03:40.337922 50319 controller.go:88] Creating new cloud provider for cluster ops-ingress
I0612 16:03:40.348956 50319 controller.go:95] Starting cloud controller for cluster ops-ingress
I0612 16:03:40.349015 50319 node_controller.go:164] Sending events to api server.
I0612 16:03:40.349041 50319 node_controller.go:173] Waiting for informer caches to sync
I0612 16:03:40.349197 50319 controller.go:231] Starting service controller
I0612 16:03:40.349205 50319 shared_informer.go:313] Waiting for caches to sync for service
I0612 16:03:40.354355 50319 reflector.go:359] Caches populated for *v1.Service from pkg/mod/k8s.io/client-go@v0.30.1/tools/cache/reflector.go:232
I0612 16:03:40.355384 50319 reflector.go:359] Caches populated for *v1.Node from pkg/mod/k8s.io/client-go@v0.30.1/tools/cache/reflector.go:232
I0612 16:03:40.450294 50319 shared_informer.go:320] Caches are synced for service
I0612 16:03:40.451673 50319 controller.go:733] Syncing backends for all LB services.
I0612 16:03:40.451674 50319 controller.go:398] Ensuring load balancer for service ops/gitea-ssh
I0612 16:03:40.451701 50319 controller.go:842] Updating backends for load balancer ops/gitea-ssh with 3 nodes: [ops-ingress-worker ops-ingress-worker2 ops-ingress-worker3]
I0612 16:03:40.451724 50319 loadbalancer.go:28] Ensure LoadBalancer cluster: ops-ingress service: gitea-ssh
I0612 16:03:40.451725 50319 loadbalancer.go:34] Update LoadBalancer cluster: ops-ingress service: gitea-ssh
I0612 16:03:40.451939 50319 instances.go:47] Check instance metadata for ops-ingress-worker2
I0612 16:03:40.451991 50319 instances.go:47] Check instance metadata for ops-ingress-worker
I0612 16:03:40.452029 50319 instances.go:47] Check instance metadata for ops-ingress-control-plane
I0612 16:03:40.452038 50319 proxy.go:218] address type Hostname, only InternalIP supported
I0612 16:03:40.452044 50319 proxy.go:218] address type Hostname, only InternalIP supported
I0612 16:03:40.452048 50319 proxy.go:218] address type Hostname, only InternalIP supported
I0612 16:03:40.452106 50319 instances.go:47] Check instance metadata for ops-ingress-worker3
I0612 16:03:40.452051 50319 proxy.go:237] haproxy config info: &{HealthCheckPort:10256 ServicePorts:map[IPv4_22_TCP:{Listener:{Address:0.0.0.0 Port:22 Protocol:TCP} Cluster:[{Address:172.18.0.2 Port:32233 Protocol:TCP} {Address:172.18.0.5 Port:32233 Protocol:TCP} {Address:172.18.0.4 Port:32233 Protocol:TCP}]}] SessionAffinity:None}
I0612 16:03:40.453491 50319 event.go:389] "Event occurred" object="ops/gitea-ssh" fieldPath="" kind="Service" apiVersion="v1" type="Normal" reason="EnsuringLoadBalancer" message="Ensuring load balancer"
I0612 16:03:40.456394 50319 proxy.go:255] updating loadbalancer with config
resources:
- "@type": type.googleapis.com/envoy.config.listener.v3.Listener
  name: listener_IPv4_22_TCP
  address:
    socket_address:
      address: 0.0.0.0
      port_value: 22
      protocol: TCP
  filter_chains:
  - filters:
    - name: envoy.filters.network.tcp_proxy
      typed_config:
        "@type": type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy
        access_log:
        - name: envoy.file_access_log
          typed_config:
            "@type": type.googleapis.com/envoy.extensions.access_loggers.stream.v3.StdoutAccessLog
        stat_prefix: tcp_proxy
        cluster: cluster_IPv4_22_TCP
I0612 16:03:40.514372 50319 loadbalancer.go:16] Get LoadBalancer cluster: ops-ingress service: gitea-ssh
I0612 16:03:40.556549 50319 instances.go:75] instance metadata for ops-ingress-worker2: &cloudprovider.InstanceMetadata{ProviderID:"kind://ops-ingress/kind/ops-ingress-worker2", InstanceType:"kind-node", NodeAddresses:[]v1.NodeAddress{v1.NodeAddress{Type:"Hostname", Address:"ops-ingress-worker2"}, v1.NodeAddress{Type:"InternalIP", Address:"172.18.0.5"}, v1.NodeAddress{Type:"InternalIP", Address:"fc00:f853:ccd:e793::5"}}, Zone:"", Region:"", AdditionalLabels:map[string]string(nil)}
I0612 16:03:40.557665 50319 instances.go:75] instance metadata for ops-ingress-worker3: &cloudprovider.InstanceMetadata{ProviderID:"kind://ops-ingress/kind/ops-ingress-worker3", InstanceType:"kind-node", NodeAddresses:[]v1.NodeAddress{v1.NodeAddress{Type:"Hostname", Address:"ops-ingress-worker3"}, v1.NodeAddress{Type:"InternalIP", Address:"172.18.0.4"}, v1.NodeAddress{Type:"InternalIP", Address:"fc00:f853:ccd:e793::4"}}, Zone:"", Region:"", AdditionalLabels:map[string]string(nil)}
I0612 16:03:40.557673 50319 instances.go:75] instance metadata for ops-ingress-worker: &cloudprovider.InstanceMetadata{ProviderID:"kind://ops-ingress/kind/ops-ingress-worker", InstanceType:"kind-node", NodeAddresses:[]v1.NodeAddress{v1.NodeAddress{Type:"Hostname", Address:"ops-ingress-worker"}, v1.NodeAddress{Type:"InternalIP", Address:"172.18.0.2"}, v1.NodeAddress{Type:"InternalIP", Address:"fc00:f853:ccd:e793::2"}}, Zone:"", Region:"", AdditionalLabels:map[string]string(nil)}
I0612 16:03:40.557884 50319 instances.go:75] instance metadata for ops-ingress-control-plane: &cloudprovider.InstanceMetadata{ProviderID:"kind://ops-ingress/kind/ops-ingress-control-plane", InstanceType:"kind-node", NodeAddresses:[]v1.NodeAddress{v1.NodeAddress{Type:"Hostname", Address:"ops-ingress-control-plane"}, v1.NodeAddress{Type:"InternalIP", Address:"172.18.0.3"}, v1.NodeAddress{Type:"InternalIP", Address:"fc00:f853:ccd:e793::3"}}, Zone:"", Region:"", AdditionalLabels:map[string]string(nil)}
I0612 16:03:40.570464 50319 node_controller.go:267] Update 4 nodes status took 119.998584ms.
I0612 16:03:40.574130 50319 controller.go:737] Successfully updated 1 out of 1 load balancers to direct traffic to the updated set of nodes
I0612 16:03:40.574144 50319 controller.go:733] Syncing backends for all LB services.
I0612 16:03:40.574175 50319 controller.go:737] Successfully updated 1 out of 1 load balancers to direct traffic to the updated set of nodes
I0612 16:03:40.574179 50319 controller.go:733] Syncing backends for all LB services.
I0612 16:03:40.574189 50319 controller.go:737] Successfully updated 1 out of 1 load balancers to direct traffic to the updated set of nodes
I0612 16:03:40.574191 50319 controller.go:733] Syncing backends for all LB services.
I0612 16:03:40.574197 50319 controller.go:737] Successfully updated 1 out of 1 load balancers to direct traffic to the updated set of nodes
I0612 16:03:40.615954 50319 server.go:101] creating container for loadbalancer
I0612 16:03:40.616227 50319 server.go:224] creating loadbalancer with parameters: [--detach --tty --label io.x-k8s.cloud-provider-kind.cluster=ops-ingress --label io.x-k8s.cloud-provider-kind.loadbalancer.name=ops-ingress/ops/gitea-ssh --net kind --init=false --hostname kindccm-ZWHVOQK77FGBMGRRVRJDFYOQRB5DX3QHLYDGLG2F --privileged --restart=on-failure --sysctl=net.ipv4.ip_forward=1 --sysctl=net.ipv6.conf.all.disable_ipv6=0 --sysctl=net.ipv6.conf.all.forwarding=1 --sysctl=net.ipv4.conf.all.rp_filter=0 --publish=22/TCP --publish-all envoyproxy/envoy:v1.30.1 bash -c echo -en 'node:
  cluster: cloud-provider-kind
  id: cloud-provider-kind-id
dynamic_resources:
  cds_config:
    resource_api_version: V3
    path: /home/envoy/cds.yaml
  lds_config:
    resource_api_version: V3
    path: /home/envoy/lds.yaml
admin:
  access_log_path: /dev/stdout
  address:
    socket_address:
      address: 0.0.0.0
      port_value: 9901
' > /home/envoy/envoy.yaml && touch /home/envoy/cds.yaml && touch /home/envoy/lds.yaml && envoy -c /home/envoy/envoy.yaml]
I0612 16:03:40.796508 50319 server.go:109] updating loadbalancer
I0612 16:03:40.796563 50319 proxy.go:218] address type Hostname, only InternalIP supported
I0612 16:03:40.796569 50319 proxy.go:218] address type Hostname, only InternalIP supported
I0612 16:03:40.796574 50319 proxy.go:218] address type Hostname, only InternalIP supported
I0612 16:03:40.796585 50319 proxy.go:237] haproxy config info: &{HealthCheckPort:10256 ServicePorts:map[IPv4_22_TCP:{Listener:{Address:0.0.0.0 Port:22 Protocol:TCP} Cluster:[{Address:172.18.0.2 Port:32233 Protocol:TCP} {Address:172.18.0.5 Port:32233 Protocol:TCP} {Address:172.18.0.4 Port:32233 Protocol:TCP}]}] SessionAffinity:None}
I0612 16:03:40.796705 50319 proxy.go:255] updating loadbalancer with config
resources:
- "@type": type.googleapis.com/envoy.config.listener.v3.Listener
  name: listener_IPv4_22_TCP
  address:
    socket_address:
      address: 0.0.0.0
      port_value: 22
      protocol: TCP
  filter_chains:
  - filters:
    - name: envoy.filters.network.tcp_proxy
      typed_config:
        "@type": type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy
        access_log:
        - name: envoy.file_access_log
          typed_config:
            "@type": type.googleapis.com/envoy.extensions.access_loggers.stream.v3.StdoutAccessLog
        stat_prefix: tcp_proxy
        cluster: cluster_IPv4_22_TCP
I0612 16:03:40.847364 50319 proxy.go:266] updating loadbalancer with config
resources:
- "@type": type.googleapis.com/envoy.config.cluster.v3.Cluster
  name: cluster_IPv4_22_TCP
  connect_timeout: 5s
  type: STATIC
  lb_policy: RANDOM
  health_checks:
  - timeout: 5s
    interval: 3s
    unhealthy_threshold: 2
    healthy_threshold: 1
    no_traffic_interval: 5s
    always_log_health_check_failures: true
    always_log_health_check_success: true
    event_log_path: /dev/stdout
    http_health_check:
      path: /healthz
  load_assignment:
    cluster_name: cluster_IPv4_22_TCP
    endpoints:
    - lb_endpoints:
      - endpoint:
          health_check_config:
            port_value: 10256
          address:
            socket_address:
              address: 172.18.0.2
              port_value: 32233
              protocol: TCP
    - lb_endpoints:
      - endpoint:
          health_check_config:
            port_value: 10256
          address:
            socket_address:
              address: 172.18.0.5
              port_value: 32233
              protocol: TCP
    - lb_endpoints:
      - endpoint:
          health_check_config:
            port_value: 10256
          address:
            socket_address:
              address: 172.18.0.4
              port_value: 32233
              protocol: TCP
I0612 16:03:50.967126 50319 server.go:117] updating loadbalancer tunnels on userspace
I0612 16:03:51.038936 50319 tunnel.go:38] found port maps map[10000:55002 22:59956] associated to container kindccm-ZWHVOQK77FGBMGRRVRJDFYOQRB5DX3QHLYDGLG2F
I0612 16:03:51.078350 50319 tunnel.go:45] setting IPv4 address 172.18.0.6 associated to container kindccm-ZWHVOQK77FGBMGRRVRJDFYOQRB5DX3QHLYDGLG2F
E0612 16:03:51.081207 50319 controller.go:298] error processing service ops/gitea-ssh (retrying with exponential backoff): failed to ensure load balancer: exit status 1
I0612 16:03:51.081271 50319 event.go:389] "Event occurred" object="ops/gitea-ssh" fieldPath="" kind="Service" apiVersion="v1" type="Warning" reason="SyncLoadBalancerFailed" message="Error syncing load balancer: failed to ensure load balancer: exit status 1"
I had the same problem. Even running the example kubectl apply -f https://kind.sigs.k8s.io/examples/loadbalancer/usage.yaml
on a newly created cluster hits the same "address type Hostname, only InternalIP supported" messages.
OS: macOS Sonoma
Container runtime: Colima / Docker Engine
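For reference, the reproduction steps were roughly the following (the cluster name and the watch step are just illustrative):

# Reproduce on a clean kind cluster and watch whether the Service gets an EXTERNAL-IP.
$ kind create cluster
$ kubectl apply -f https://kind.sigs.k8s.io/examples/loadbalancer/usage.yaml
$ cloud-provider-kind        # in a separate terminal, kept running in the foreground
$ kubectl get svc -w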
@zspsole Running cloud-provider-kind on a Mac requires sudo:
sudo cloud-provider-kind
Then check with docker ps -a.
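For example (the label filter is optional; replace ops-ingress with your own cluster name, taken here from the logs above):

$ sudo cloud-provider-kind
# In another terminal, the per-Service load balancer container should appear:
$ docker ps -a --filter label=io.x-k8s.cloud-provider-kind.cluster=ops-ingress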
@zspsole Running cloud-provider-kind on a Mac requires sudo:
sudo cloud-provider-kind

Yep, that was it. Thank you!
I need to figure out whether we can detect this and warn users, or refuse to run at all.
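A minimal sketch of the kind of pre-flight check this could be, assuming missing root privileges is what breaks the macOS/WSL2 case (cloud-provider-kind itself is a Go binary; this wrapper only illustrates the idea):

# Hypothetical wrapper script: refuse to start without root instead of failing later.
if [ "$(id -u)" -ne 0 ]; then
    echo "cloud-provider-kind needs root privileges to assign IPs to interfaces; re-run with sudo" >&2
    exit 1
fi
exec cloud-provider-kind "$@"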
I hit the same problem on WSL2; it was solved by adding sudo.
Can you explain the reason?
Adding IPs to interfaces is a privileged operation
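For illustration, assigning the load balancer address from the logs above (172.18.0.6) to a local interface is the kind of operation involved; on Linux it looks roughly like the following, where the interface and prefix are placeholders rather than what cloud-provider-kind actually runs:

# Illustrative only: adding an address to an interface requires elevated privileges.
$ ip addr add 172.18.0.6/16 dev lo         # fails with "Operation not permitted" for a normal user
$ sudo ip addr add 172.18.0.6/16 dev lo    # succeeds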