Kubernetes - Azure Load Balancer provisioning fails - ip is referenced by multiple ipconfig
foram31k opened this issue · 12 comments
Is this a request for help?: yes
Is this a BUG REPORT or FEATURE REQUEST? (choose one): ISSUE
My service.yaml file is:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: abc-ms
  labels:
    name: abc-ms
spec:
  sessionAffinity: ClientIP
  type: LoadBalancer
  loadBalancerIP: x.x.x.x
  ports:
  - name: "3007"
    port: 3007
    #targetPort: 3007
  selector:
    name: abc-ms
```
Orchestrator and version (e.g. Kubernetes, DC/OS, Swarm)
Orchestration tool: Kubernetes deployed on Azure
Server Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.5", GitCommit:"17d7182a7ccbb167074be7a87f0a68bd00d58d97", GitTreeState:"clean", BuildDate:"2017-08-31T08:56:23Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
What happened:
I've provisioned an ACS cluster with Kubernetes. When deploying a Pod together with a Service of type: LoadBalancer, I noticed that kube-controller-manager on the master cannot provision an Azure Load Balancer for the Service.
If I run kubectl get services, my service's EXTERNAL-IP is stuck in pending state.
Describing the service shows this error:
Error creating load balancer (will retry): Failed to create load balancer for service dev-kube/abc-ms: network.LoadBalancersClient#CreateOrUpdate: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="PublicIPReferencedByMultipleIPConfigs" Message="Public ip address /subscriptions/23b/resourceGroups/abc/providers/Microsoft.Network/publicIPAddresses/dev-kube-ip is referenced by multiple ipconfigs in resource /subscriptions/23b/resourceGroups/abc/providers/Microsoft.Network/loadBalancers/abc." Details=[]
What you expected to happen:
I expect my Service to get an external IP via a provisioned load balancer when I deploy a Kubernetes Service with type: LoadBalancer.
The public IP in loadBalancerIP: x.x.x.x has already been used elsewhere. Could you change it to another one, or just remove it from the service spec?
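For reference, here is a sketch of the same Service with the loadBalancerIP field dropped, as suggested above; without it, Azure allocates a fresh public IP instead of reusing the contested one:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: abc-ms
  labels:
    name: abc-ms
spec:
  sessionAffinity: ClientIP
  type: LoadBalancer   # no loadBalancerIP: Azure provisions a new public IP
  ports:
  - name: "3007"
    port: 3007
  selector:
    name: abc-ms
```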
@feiskyer yes, your solution worked.
But I want to use the same LB IP to deploy different services running on different ports.
Can't we do that?
@foram31k I think you could use an ingress for this. See https://github.com/kubernetes/ingress-nginx/blob/master/deploy/README.md for usage.
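To illustrate the suggestion: with an ingress, a single ingress-controller Service holds the one public IP, and routing rules fan traffic out to multiple backend Services. A minimal sketch (the resource name, path, and the current networking.k8s.io/v1 API are assumptions; older clusters used extensions/v1beta1):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: abc-ingress          # hypothetical name
spec:
  ingressClassName: nginx    # assumes an nginx ingress controller is installed
  rules:
  - http:
      paths:
      - path: /abc           # hypothetical path; route it to the abc-ms Service
        pathType: Prefix
        backend:
          service:
            name: abc-ms
            port:
              number: 3007
```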
Thank you for the suggestion.
Will surely look into it.
I have the same issue - the first time it worked OK (the IP had not been assigned to anything yet, so on deployment it was auto-assigned to the LB). However, when I closed everything down and started it up again, it said the IPs are already assigned (same error message). Other than creating a new IP address, is there a way to reuse what I already have?
Any news? Same issue in K8S 1.12.3
@IvanovOleg What errors are you getting? Could you share the error message from kube-controller-manager?
```yaml
apiVersion: v1
kind: Service
metadata:
  name: azure-load-balancer
spec:
  loadBalancerIP: <existing pip>
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: azure-load-balancer
```
If the public IP exists and is not connected to the actual LB as a frontend configuration, it works fine. If the LB exists and the existing public IP is already connected to it, it fails with the following error:
network.LoadBalancersClient#CreateOrUpdate: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="PublicIPReferencedByMultipleIPConfigs" Message="Public ip address /subscriptions/<sub id>/resourceGroups/<rg name>/providers/Microsoft.Network/publicIPAddresses/<pip name> is referenced by multiple ipconfigs in resource /subscriptions/<sub id>/resourceGroups/<rg id>/providers/Microsoft.Network/loadBalancers/<lb name>." Details=[]
I use terraform for creating the entire cluster. Because terraform can't manage resources that are not defined in its configuration, I have to create the kubernetes service LB, public IP, and front-end configuration with terraform and then use those existing resources in kubernetes. When I create a service in kubernetes with a predefined IP address, it ignores the existing front-end configuration of the LB and tries to create another one using the same public IP, which is forbidden. If I instead let kubernetes create the front-end configuration, the next execution of terraform apply will destroy it. In that case I would have to manually import the resource into the terraform state to avoid termination, which is not suitable.
A public IP can't be referenced by multiple LB frontend configurations, hence "If lb exists and existing public ip is connected to lb, it fails"
is an expected error. (This is a limitation of the Azure ARM API.)
If you need to use a pre-defined public IP, you can create the public IP first and then set it in spec.loadBalancerIP.
But be sure not to connect it to any existing frontend configuration. (This is a limitation from kubernetes, which ensures the frontend IP configurations are managed by it.)
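One related wrinkle with pre-created public IPs: the Azure cloud provider looks for the IP in the cluster's resource group by default. If the IP was created elsewhere (as terraform setups often do), the service.beta.kubernetes.io/azure-load-balancer-resource-group annotation points the provider at it. A sketch, where the resource-group name is a placeholder:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: azure-load-balancer
  annotations:
    # Hypothetical resource-group name; only needed when the public IP
    # lives outside the cluster's own resource group.
    service.beta.kubernetes.io/azure-load-balancer-resource-group: my-pip-rg
spec:
  type: LoadBalancer
  loadBalancerIP: <existing pip>   # pre-created, NOT attached to any LB frontend
  ports:
  - port: 80
  selector:
    app: azure-load-balancer
```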
@feiskyer That doesn't fix my problem. Terraform will destroy this front-end configuration on the next execution, because it wasn't defined in its configuration. It would be great to have some kind of annotation that allows using an existing front-end configuration.
Is it possible to do something like this:
```yaml
spec:
  type: LoadBalancer
  loadBalancerIP: xx.xx.xx.xx
  ports:
  - name: svc-dashboard
    protocol: TCP
    port: 8080
    targetPort: 9080
  - name: svc-http-mgmt
    protocol: TCP
    port: 8090
    targetPort: 9090
  - name: svc-core
    protocol: TCP
    port: 8070
    targetPort: 9070
```
I get the same problem, but this is all from one single Service, not multiple services using the same IP.