Helm 2.2.3 not working properly with kubeadm 1.6.1 default RBAC rules
IronhandedLayman opened this issue · 41 comments
When installing a cluster for the first time using kubeadm v1.6.1, the initialization defaults to setting up RBAC-controlled access, which interferes with the permissions Tiller needs to do installations, scan for installed components, and so on. helm init works without issue, but helm list, helm install, and so on all fail, citing one missing permission or another.
A workaround for this is to create a service account, add the service account to the Tiller deployment, and bind that service account to the ClusterRole cluster-admin. If that is how it should work out of the box, then those steps should be part of helm init. Ideally, a new ClusterRole would be created based on the privileges of the user instantiating the Tiller instance, but that could get complicated very quickly.
At the very least, the documentation should say something about this so that users installing Helm with the included instructions aren't left wondering why they can't install anything.
Specific steps for my workaround:
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl edit deploy --namespace kube-system tiller-deploy # and add the line serviceAccount: tiller under spec.template.spec
Elsewhere I proposed the idea, casually, of adding an option to helm init to allow specifying the service account name that Tiller should use.
@seh do you think that helm init should create a default service account for Tiller, given that RBAC is becoming the default in Kubernetes (and that kubeadm gives you no choice in the matter)?
I do think that would be useful, but a conscientious administrator is going to want to be able to override that by specifying a service account name too, in which case we should trust that he will take care of ensuring the account exists.
In my Tiller deployment script, I do create a service account called, believe it or not, "tiller," together with a ClusterRoleBinding granting it the "cluster-admin" role (for now).
I've done a bunch of testing now, and I agree with @seh. The right path forward seems to be to create the necessary RBAC artifacts during helm init, but give flags for overriding this behavior.
I would suggest that...
- By default, we create the service account and binding, and add the account to the deployment.
- We add only the flag --service-account, which, if specified, skips creating the SA and binding, and ONLY modifies the serviceAccount field on Tiller.
Thus, the "conscientious administrator" will be taking upon themselves the task of setting up their own role bindings and service accounts.
If we create the binding for the service account, presumably we'll create a ClusterRoleBinding granting the "cluster-admin" ClusterRole to Tiller's service account. We should document, though, that it's possible to use Tiller with more restrictive permissions, depending on what's contained in the charts you'll install. In some cases, for a namespace-local Tiller deployment, even the "edit" ClusterRole bound via RoleBinding would be sufficient.
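A minimal sketch of that more restrictive, namespace-local option (the namespace staging is only a placeholder, and the service account is assumed to be named tiller and to live in the same namespace as Tiller): a RoleBinding granting the built-in edit ClusterRole within one namespace:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: tiller-edit
  namespace: staging   # placeholder namespace; Tiller would be deployed here for a namespace-local setup
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: edit
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: staging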
@IronhandedLayman Thank you for your solution! That finally made Helm work with k8s 1.6.
Do you know where exactly the config file generated by the command kubectl edit deploy --namespace kube-system tiller-deploy is stored? The command opens a file that has the line selfLink: /apis/extensions/v1beta1/namespaces/kube-system/deployments/tiller-deploy, yet searching for tiller-deploy across the whole file system returns nothing.
I'm working on an automated installation and trying to bake this last command into Ansible. Any advice would be appreciated! Thanks!
@MaximF kubectl edit uses a temp file for changes, I believe. It queries the API for the current content, stores that in a temp file, opens it with $EDITOR, and when you close the file, it submits the temp file to the API and deletes the file.
If you want to keep everything in CI, I suggest you just copy the deployment from the API and use kubectl apply instead of helm init.
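A rough sketch of that approach (assuming Tiller was deployed once by helm init and the tiller service account from the workaround above already exists):
kubectl get deploy tiller-deploy --namespace kube-system -o yaml > tiller-deploy.yaml
# Add serviceAccount: tiller under spec.template.spec and keep the file in version control.
kubectl apply -f tiller-deploy.yaml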
Adding a temporary alternate solution for automation and @MaximF:
For the Katacoda scenario (https://www.katacoda.com/courses/kubernetes/helm-package-manager), we didn't want users having to use kubectl edit to see the benefit of Helm.
Instead, we "disable" RBAC using the command:
kubectl create clusterrolebinding permissive-binding --clusterrole=cluster-admin --user=admin --user=kubelet --group=system:serviceaccounts
Thanks to the Weave Cortex team for the command (cortexproject/cortex#392).
@BenHall after running that, I'm getting an error like this on the helm install PACKAGE step:
x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")
@michelleN I think it makes the most sense to do this in two parts:
- Add support for helm init --service-account=NAME. We could try to get this into 2.3.2 to greatly ease people's pain.
- Look into creating a default service account and role binding during helm init. That we can get into 2.4.0.
Does tiller need cluster-admin permissions? Does it make sense to maintain/document a least-privileged role that is specific to tiller, which only gives access to the endpoints it needs?
That depends wholly on what the charts you install try to create. If they create namespaces, ClusterRoles, and ClusterRoleBindings, then Tiller needs the "cluster-admin" role. If all it does is create, say, ConfigMaps in an existing namespace, then it could get by with much less. You have to tune Tiller to what you want to do with Tiller, or, less fruitfully, vice versa.
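For illustration only, a minimal sketch of the small end of that spectrum: a namespace-scoped Role covering just Tiller's own release storage (ConfigMaps in the namespace where Tiller runs). The name tiller-configmaps is made up, and this alone would not be enough to install most charts:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: tiller-configmaps
  namespace: kube-system
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create", "get", "list", "update", "delete"]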
Ah, yes. Thanks @seh! It will really depend on the charts, as they might be creating different objects.
@seh Any chance you could whip up a quick entry in docs/install_faq.md to summarize the RBAC advice from above?
Helm 2.4.0 will ship (later today) with the helm init --service-account=ACCOUNT_NAME flag, but we punted on defining a default SA/Role. That probably is something people ought to do on their own. Or at least that is our current operating assumption.
The critical parts are done. Moving to 2.4.1 to remind myself about docs.
So. Right now I am binding cluster-admin to a serviceAccount: helm. Should be improved, but here you go:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: helm
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: helm
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: helm
  namespace: kube-system
helm init --service-account helm
To automate the workaround, here's a non-interactive version of the temporary fix described in the first comment here, using patch instead of edit:
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
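With Helm 2.4.0 or newer, the same end state can be reached without patching: create the service account and binding as above, then pass the account name to helm init using the --service-account flag mentioned earlier:
helm init --service-account tiller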
How do I find whether RBAC is enabled on a k8s cluster or not? I am using the following version:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.2", GitCommit:"477efc3cbe6a7effca06bd1452fa356e2201e1ee", GitTreeState:"clean", BuildDate:"2017-04-19T20:33:11Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.2", GitCommit:"477efc3cbe6a7effca06bd1452fa356e2201e1ee", GitTreeState:"clean", BuildDate:"2017-04-19T20:22:08Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
@bobbychef64 $ kubectl api-versions|grep rbac
Thanks Bregor for your reply. I executed the command and the output is below:
$ kubectl api-versions|grep rbac
rbac.authorization.k8s.io/v1alpha1
rbac.authorization.k8s.io/v1beta1
From the above output I think RBAC is enabled, so I started to create a role, but I am getting the error below:
$ kubectl create role pod-reader \
  --verb=get \
  --verb=list \
  --verb=watch \
  --resource=pods \
  --namespace=ns-1
Error from server (Forbidden): roles.rbac.authorization.k8s.io "pod-reader" is forbidden: attempt to grant extra privileges: [{[get] [] [pods] [] []} {[list] [] [pods] [] []} {[watch] [] [pods] [] []}] user=&{admin admin [system:authenticated] map[]} ownerrules=[] ruleResolutionErrors=[]
@bobbychef64 this means your particular user has no rights to create custom roles. As I can see, your only group in the cluster is system:authenticated. Usually this means your cluster has no authentication system (like static-tokens, password-file, etc.) and your authorization-mode is likely AlwaysAllow. In that case you can't manage RBAC directly until your cluster actually uses it.
Okay, I understand a little. The problem is I have multiple teams and everyone is asking for the config, which has the admin username and password. I am worried that if anyone changes things it will have a very bad effect. How can I set up an authentication system?
how can i setup authentication system
@bobbychef64 This should help: https://kubernetes.io/docs/admin/authentication/
Thanks gtaylor, I will check this link.
When I execute:
kubectl edit deploy --namespace kube-system tiller-deploy
I'm getting the error below:
Error from server (NotFound): deployments.extensions "tiller-deploy" not found
Please help me.
@isansahoo have you run helm init? Check out the quick start guide: https://github.com/kubernetes/helm/blob/master/docs/quickstart.md
Just realized that we don't put docs changes in patch releases. So bumping to 2.5.0.
Any update on this for 2.5.0?
I just faced this again after switching from a kubeadm-controlled k8s to a kops one. Running this:
helm init
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
Then
helm install --name=traefik stable/traefik --set=rbac.enabled=true
The kubeadm-controlled cluster does not return an error, but the kops cluster immediately shows this:
Error: release traefik failed: clusterroles.rbac.authorization.k8s.io "traefik-traefik" is forbidden: attempt to grant extra privileges: [{[get] [] [pods] [] []} {[list] [] [pods] [] []} {[watch] [] [pods] [] []} {[get] [] [services] [] []} {[list] [] [services] [] []} {[watch] [] [services] [] []} {[get] [] [endpoints] [] []} {[list] [] [endpoints] [] []} {[watch] [] [endpoints] [] []} {[get] [extensions] [ingresses] [] []} {[list] [extensions] [ingresses] [] []} {[watch] [extensions] [ingresses] [] []}] user=&{system:serviceaccount:kube-system:tiller a4668563-6d50-11e7-a489-026256e9594f [system:serviceaccounts system:serviceaccounts:kube-system system:authenticated] map[]} ownerrules=[] ruleResolutionErrors=[clusterroles.rbac.authorization.k8s.io "cluster-admin" not found]
Can this be something to do with how the kops cluster is being set up by default? Both clusters' version is:
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.7", GitCommit:"095136c3078ccf887b9034b7ce598a0a1faff769", GitTreeState:"clean", BuildDate:"2017-07-05T16:40:42Z", GoVersion:"go1.7.6", Compiler:"gc", Platform:"linux/amd64"}
I've also got this error on a kops 1.7 cluster and a minikube 1.7 cluster, both using helm version 2.5.1 on client side and tiller.
I've tried the various suggestions above regarding creating a ServiceAccount, a ClusterRoleBinding, and patching the tiller deployment, but none of the solutions work and the error message remains the same.
The traefik and nginx-ingress (a local PR I'm working on) charts are exhibiting the same problem. Example error below:
Error: release nginx-ingress failed: clusterroles.rbac.authorization.k8s.io "nginx-ingress-clusterrole" is forbidden: attempt to grant extra privileges: [PolicyRule{Resources:["configmaps"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["configmaps"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["endpoints"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["endpoints"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["nodes"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["nodes"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["pods"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["pods"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["secrets"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["secrets"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["nodes"], APIGroups:[""], Verbs:["get"]} PolicyRule{Resources:["services"], APIGroups:[""], Verbs:["get"]} PolicyRule{Resources:["services"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["services"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["ingresses"], APIGroups:["extensions"], Verbs:["get"]} PolicyRule{Resources:["ingresses"], APIGroups:["extensions"], Verbs:["list"]} PolicyRule{Resources:["ingresses"], APIGroups:["extensions"], Verbs:["watch"]} PolicyRule{Resources:["events"], APIGroups:[""], Verbs:["create"]} PolicyRule{Resources:["events"], APIGroups:[""], Verbs:["patch"]} PolicyRule{Resources:["ingresses/status"], APIGroups:["extensions"], Verbs:["update"]}] user=&{system:serviceaccount:kube-system:tiller e18d1467-7a7a-11e7-a9f3-080027e3d749 [system:serviceaccounts system:serviceaccounts:kube-system system:authenticated] map[]} ownerrules=[] ruleResolutionErrors=[clusterroles.rbac.authorization.k8s.io "cluster-admin" not found]
Is this likely due to the last line of the error message?
ruleResolutionErrors=[clusterroles.rbac.authorization.k8s.io "cluster-admin" not found]
I'm not particularly au fait with RBAC on k8s; should that role exist? Neither the nginx-ingress nor the traefik charts make mention of it, and kubectl get sa doesn't show it in my cluster:
% kubectl get sa --all-namespaces
NAMESPACE NAME SECRETS AGE
default default 1 8d
kube-public default 1 8d
kube-system default 1 8d
kube-system tiller 1 14m
nginx-ingress default 1 15m
That's not a ServiceAccount; it's a ClusterRole.
Try the following:
kubectl get clusterroles
kubectl get clusterrole cluster-admin -o yaml
Sorry @seh, my bad on typing the above; I did check clusterroles as well as serviceaccounts.
The output shown below is from my kops 1.7 cluster, but the clusterrole is also absent on my minikube 1.7 cluster.
% kubectl get clusterroles
NAME AGE
kopeio:networking-agent 2d
kops:dns-controller 2d
kube-dns-autoscaler 2d
% kubectl get clusterrole cluster-admin -o yaml
Error from server (NotFound): clusterroles.rbac.authorization.k8s.io "cluster-admin" not found
Do you have the RBAC authorizer activated? According to the documentation, each time the API server starts with RBAC activated, it will ensure that these roles and bindings are present.
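If the authorizer isn't enabled, one way to turn it on for a kops cluster is sketched below; <cluster_name> is a placeholder, and the rolling update restarts the control plane:
kops edit cluster <cluster_name>
# In the cluster spec, set:
#   authorization:
#     rbac: {}
kops update cluster <cluster_name> --yes
kops rolling-update cluster <cluster_name> --yes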
After running helm init, helm list and helm install stable/nginx-ingress caused the following errors for me on Kubernetes 1.8.4:
# helm list
Error: configmaps is forbidden: User "system:serviceaccount:kube-system:default" cannot list configmaps in the namespace "kube-system"
# helm install stable/nginx-ingress
Error: no available release name found
Thanks to @kujenga! The following commands resolved the errors for me, and helm list and helm install work fine after running them:
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
Not working on Kubernetes v1.9.0
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.0", GitCommit:"925c127ec6b946659ad0fd596fa959be4 GitTreeState:"clean", BuildDate:"2017-12-15T21:07:38Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.0", GitCommit:"925c127ec6b946659ad0fd596fa959be4 GitTreeState:"clean", BuildDate:"2017-12-15T20:55:30Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
helm version
Client: &version.Version{SemVer:"v2.7.2", GitCommit:"8478fb4fc723885b155c924d1c8c410b7a9444e6", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.7.2", GitCommit:"8478fb4fc723885b155c924d1c8c410b7a9444e6", GitTreeState:"clean"}
helm list
Error: Unauthorized
helm install stable/nginx-ingress
Error: no available release name found
See my reply in #3371.
In case you run the command kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller and you get the error below:
Error from server (Forbidden): clusterrolebindings.rbac.authorization.k8s.io is forbidden: User $username cannot create clusterrolebindings.rbac.authorization.k8s.io at the cluster scope: Required "container.clusterRoleBindings.create" permission.
do the following:
- gcloud container clusters describe <cluster_name> --zone
Look for the password and username in the output and copy them, then run the same command, but this time with the admin username and password:
- kubectl --username="copied username" --password="copied password" create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
The error message brought me here:
Error: configmaps is forbidden: User "system:serviceaccount:kube-system:default" cannot list configmaps in the namespace "kube-system"
but actually I found a solution here:
Step 1: Get your identity
gcloud info | grep Account
This will output something like Account: [kubectl@gserviceaccount.com]
Step 2: Grant cluster-admin to your current identity
kubectl create clusterrolebinding myname-cluster-admin-binding \
  --clusterrole=cluster-admin \
  --user=kubectl@gserviceaccount.com
In my case, the gcloud user (kubectl as a service account) had to be assigned Owner privileges in the IAM console.