kelseyhightower/kubeadm-single-node-cluster

Failed to create load balancer:

pydevops opened this issue · 3 comments

Error
User "system:kube-controller-manager" cannot create configmaps in the namespace "kube-system"

Analysis:
This is due to an issue in kubeadm: kubernetes/kubeadm#425.
kubeadm version: 1.7.5
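For reference, the workaround discussed in kubernetes/kubeadm#425 amounts to granting the controller-manager access to configmaps in kube-system. A hedged sketch of that RBAC fix (the role and binding names below are illustrative, not necessarily the exact ones from that issue):

```shell
# Sketch: grant the system:kube-controller-manager user permission to
# manage configmaps in kube-system. Names are illustrative placeholders.
kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: kube-controller-manager-configmaps
  namespace: kube-system
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["create", "get", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: kube-controller-manager-configmaps
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kube-controller-manager-configmaps
subjects:
  - kind: User
    name: system:kube-controller-manager
    apiGroup: rbac.authorization.k8s.io
EOF
```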

Next:
After applying the RBAC fix from kubernetes/kubeadm#425, I got "Cannot EnsureLoadBalancer() with no hosts" from kubectl describe svc/nginx.
Possibly related: kubernetes/kubernetes#34866.

I'm working on a fix for the RBAC issue.

"Cannot EnsureLoadBalancer() with no hosts" occurs because the service controller explicitly ignores the master node via its label (https://github.com/kubernetes/kubernetes/blob/108ee220966677063d84aca7c9842320b6309499/pkg/controller/service/service_controller.go#L597), and on a single-node cluster the master is the only node available.
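Given the label check linked above, one possible workaround on a single-node cluster (an assumption based on reading that code path, not an officially supported fix) is to remove the master role label so the service controller stops excluding the node:

```shell
# List the nodes to find the node name, then strip the master role label
# so the service controller no longer filters the node out.
# <node-name> is a placeholder for your actual node name.
kubectl get nodes
kubectl label node <node-name> node-role.kubernetes.io/master-
```

The trailing `-` in `kubectl label` removes the named label rather than setting it.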

Ran into this as well on 1.7.5 with a single-node cluster. It only occurs when I deploy the app from a YAML file; if I do kubectl run ... followed by kubectl expose ..., it works fine.

Any workarounds? I need to use the YAML config to deploy a CloudSQL proxy container as a "sidecar" alongside the app. Thanks!

Works

kubectl run CONTAINER_NAME --image=gcr.io/PROJECT_NAME/IMAGE_NAME:latest --port=80
kubectl expose deployment CONTAINER_NAME --type=LoadBalancer --port 80 --target-port 80

Fails

kubectl create -f deployment.yml
kubectl expose deployment CONTAINER_NAME --type=LoadBalancer --port 80 --target-port 80

Error

Error creating load balancer (will retry): Failed to create load balancer for service default/CONTAINER_NAME: Cannot EnsureLoadBalancer() with no hosts

Deployment YAML FILE

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: CONTAINER_NAME
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: CONTAINER_NAME
    spec:
      containers:
        - image: gcr.io/PROJECT_NAME/IMAGE_NAME:latest
          name: CONTAINER_NAME
          env:
            - name: DB_HOST
              value: 127.0.0.1:3306
            # [START cloudsql_secrets]
            - name: CLOUDSQL_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: cloudsql-db-credentials
                  key: password
            - name: CLOUDSQL_USER
              valueFrom:
                secretKeyRef:
                  name: cloudsql-db-credentials
                  key: username
            # [END cloudsql_secrets]
          ports:
            - containerPort: 80
              name: CONTAINER_NAME
        # [START proxy_container]
        - image: gcr.io/cloudsql-docker/gce-proxy:1.09
          name: cloudsql-proxy
          command: ["/cloud_sql_proxy", "--dir=/cloudsql",
                    "-instances=PROJECT_NAME:ZONE:DB_NAME=tcp:3306",
                    "-credential_file=/secrets/cloudsql/credentials.json"]
          volumeMounts:
            - name: cloudsql-instance-credentials
              mountPath: /secrets/cloudsql
              readOnly: true
            - name: ssl-certs
              mountPath: /etc/ssl/certs
            - name: cloudsql
              mountPath: /cloudsql
        # [END proxy_container]
      # [START volumes]
      volumes:
        - name: cloudsql-instance-credentials
          secret:
            secretName: cloudsql-instance-credentials
        - name: ssl-certs
          hostPath:
            path: /etc/ssl/certs
        - name: cloudsql
          emptyDir: {}
      # [END volumes]

The service controller explicitly ignores the master node, so this error message is expected.
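To confirm this is what is happening, you can check whether your only node carries the master label (a quick sketch; the exact column output varies by kubectl version):

```shell
# Show nodes with the master role label as an extra column; a populated
# value means the 1.7.x service controller will skip this node when
# building the load balancer host list.
kubectl get nodes -L node-role.kubernetes.io/master
```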