Current version may not be compatible with K8s 1.8.4-gke.0
mshappe opened this issue · 3 comments
I just tried to spin up a cluster using the same configuration that previously worked on a 1.8.2-gke.0 cluster. The new cluster is running 1.8.4-gke.0, and when I apply the kube-lego configs, the error below repeats in my log and the service never actually starts.
reflector.go:201] github.com/jetstack/kube-lego/pkg/kubelego/watch.go:112: Failed to list *v1beta1.Ingress: ingresses.extensions is forbidden: User "system:serviceaccount:kube-lego:default" cannot list ingresses.extensions at the cluster scope: Unknown user "system:serviceaccount:kube-lego:default"
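To confirm this is an RBAC permission problem rather than something else, you can ask the API server directly whether the service account is allowed to list ingresses. A hedged sketch (assumes kube-lego runs under the `default` service account in the `kube-lego` namespace, as the log above suggests):

```shell
# Ask the API server whether the kube-lego service account may list
# ingresses cluster-wide; "no" matches the forbidden error in the log.
kubectl auth can-i list ingresses.extensions \
  --as=system:serviceaccount:kube-lego:default
```

Once a suitable ClusterRoleBinding is in place, the same command should answer "yes".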
You'll need to set up RBAC. For example, the YAML for my kube-lego setup, generated with https://github.com/kubernetes/charts/tree/master/stable/kube-lego, looks as follows:
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app: kube-lego
    chart: kube-lego-0.3.0
    heritage: Tiller
    release: kube-lego2
  name: kube-lego2-kube-lego
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  labels:
    app: kube-lego
    chart: kube-lego-0.3.0
    heritage: Tiller
    release: kube-lego2
  name: kube-lego2-kube-lego
rules:
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - create
  - get
  - delete
  - update
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs:
  - get
  - update
  - create
  - list
  - patch
  - delete
  - watch
- apiGroups:
  - ""
  resources:
  - endpoints
  - secrets
  verbs:
  - get
  - create
  - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  labels:
    app: kube-lego
    chart: kube-lego-0.3.0
    heritage: Tiller
    release: kube-lego2
  name: kube-lego2-kube-lego
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kube-lego2-kube-lego
subjects:
- kind: ServiceAccount
  name: kube-lego2-kube-lego
  namespace: default
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: kube-lego
    chart: kube-lego-0.3.0
    heritage: Tiller
    release: kube-lego2
  name: kube-lego2-kube-lego
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: kube-lego
        release: kube-lego2
    spec:
      serviceAccountName: kube-lego2-kube-lego
      containers:
      - name: kube-lego
        image: "jetstack/kube-lego:0.1.5"
        imagePullPolicy: "IfNotPresent"
        env:
        - name: LEGO_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: LEGO_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: "LEGO_EMAIL"
          value: "admin@rehive.com"
        - name: "LEGO_PORT"
          value: "8080"
        - name: "LEGO_URL"
          value: "https://acme-v01.api.letsencrypt.org/directory"
        ports:
        - containerPort: 8080
        readinessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 5
          timeoutSeconds: 1
        resources:
          {}
```
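To put the manifests above to use, apply them and confirm the objects exist. A hedged sketch (the filename `kube-lego-rbac.yaml` is an assumption; use whatever file you saved the YAML to):

```shell
# Apply the ServiceAccount, ClusterRole, ClusterRoleBinding, and Deployment
# (assumes the YAML above was saved as kube-lego-rbac.yaml).
kubectl apply -f kube-lego-rbac.yaml

# Confirm the binding and service account were created.
kubectl get clusterrolebinding kube-lego2-kube-lego
kubectl get serviceaccount kube-lego2-kube-lego
```

Note that the ClusterRoleBinding's subject references the `default` namespace; if you deploy kube-lego into a different namespace, update `subjects[0].namespace` to match, or the pod will still get the forbidden error.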
You'll need a similar ServiceAccount, ClusterRole, and ClusterRoleBinding. Otherwise, a quick-and-dirty shortcut is to grant cluster-admin permissions to the kube-lego:default service account:
kubectl create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-lego:default
(Probably not recommended for a production setup.)
The latest master contains RBAC instructions and works perfectly on 1.8.6-gke.0.
I get this same error on GCP with jetstack/kube-lego:master-4209 and Kubernetes 1.8.8-gke.0.