k3s-io/k3s

/etc/rancher/k3s/k3s.yaml is world readable

mdempsky opened this issue Β· 33 comments

Installing k3s via get.k3s.io creates a world readable /etc/rancher/k3s/k3s.yaml file, which appears to contain a plain text admin password.

Yeah, we actually did this on purpose but I can see how people wouldn't like it. You can change the file mode of the kubeconfig as a parameter in k3s server. I think the best approach would probably be to create a k3s group and prompt the user to run usermod like docker installation does. The way it's done now was to avoid issue where people install and then can't access kubernetes because they are root. In that situation kubectl gives a useless error saying it can't connect to port 8080.

I think the best approach would probably be to create a k3s group and prompt the user to run usermod like docker installation does.

That sounds reasonable to me.

The way it's done now was to avoid issue where people install and then can't access kubernetes because they are root.

Did you mean "unless they are root"? (Just making sure I understand your explanation.)

I think what I'd like to do here is make the file not world readable, and then change the kubectl wrapper code in k3s to try to read /etc/rancher/k3s/k3s.yaml and, if it's not accessible, issue a warning. kubectl might still fail, but it will at least help the user know that maybe they need to run as root. In the warning message we can indicate that the server can be launched with --write-kubeconfig-mode to change the permissions.
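A rough shell sketch of the check described above (hypothetical: the real logic lives inside k3s's kubectl wrapper, and the function name here is made up for illustration):

```shell
# Hypothetical sketch of the proposed warning; not the actual k3s code.
warn_if_unreadable() {
  # $1: path to the kubeconfig, e.g. /etc/rancher/k3s/k3s.yaml
  if [ ! -r "$1" ]; then
    echo "WARN: unable to read $1; run as root, or start the server with --write-kubeconfig-mode" >&2
    return 1
  fi
}
```

For example, `warn_if_unreadable /etc/rancher/k3s/k3s.yaml` before invoking kubectl would print the hint instead of kubectl's bare "permission denied".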

Version - v0.6.0-rc3
Verified fixed

For most installations now, all kubectl commands have to be executed as root, or the permissions of /etc/rancher/k3s/k3s.yaml have to be loosened. Was this really the intended behavior? I guess most users will want to do the latter.

Also, there seems to be no documentation around --write-kubeconfig-mode, so I don't know how to use that flag.

Same here. I was caught unaware when I updated one of my clusters, and --write-kubeconfig-mode is not documented. Should I specify it during the initial install, edit the systemd unit... what?

Hi all, I found out how to use the new flag:

  • using --write-kubeconfig-mode 644
$ curl -sfL https://get.k3s.io | sh -s - --write-kubeconfig-mode 644
  • using the variable K3S_KUBECONFIG_MODE
$ curl -sfL https://get.k3s.io | K3S_KUBECONFIG_MODE="644" sh -s -
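For anyone unsure what mode 644 actually grants, here's a quick sketch you can try on a scratch file (no k3s needed; the temp file just stands in for k3s.yaml):

```shell
# Demonstrate mode 644 on a throwaway file (stands in for /etc/rancher/k3s/k3s.yaml):
f=$(mktemp)
chmod 644 "$f"        # owner: read/write; group and others: read-only
stat -c '%a' "$f"     # prints: 644
rm -f "$f"
```

That "others: read-only" part is exactly why the thread below warns against 644: any local user can read the admin credentials.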

I think this broke the quick start on https://k3s.io/

curl -sfL https://get.k3s.io | sh -
[INFO] Finding latest release
[INFO] Using v0.7.0 as release
[INFO] Downloading hash https://github.com/rancher/k3s/releases/download/v0.7.0/sha256sum-amd64.txt
[INFO] Downloading binary https://github.com/rancher/k3s/releases/download/v0.7.0/k3s
[INFO] Verifying binary download
[INFO] Installing k3s to /usr/local/bin/k3s
[INFO] Creating /usr/local/bin/kubectl symlink to k3s
[INFO] Creating /usr/local/bin/crictl symlink to k3s
[INFO] Creating /usr/local/bin/ctr symlink to k3s
[INFO] Creating killall script /usr/local/bin/k3s-killall.sh
[INFO] Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO] env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO] systemd: Creating service file /etc/systemd/system/k3s.service
[INFO] systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service β†’ /etc/systemd/system/k3s.service.
[INFO] systemd: Starting k3s
tim@tim-GE72MVR-7RG:$ k3s kubectl get node
WARN[2019-08-05T00:47:53.065227180-07:00] Unable to read /etc/rancher/k3s/k3s.yaml, please start server with --write-kubeconfig-mode to modify kube config permissions
error: Error loading config file "/etc/rancher/k3s/k3s.yaml": open /etc/rancher/k3s/k3s.yaml: permission denied
tim@tim-GE72MVR-7RG:$ sudo k3s server &

For others still struggling with this (when using the quick-start install script on CentOS 7, like me):
curl -sfL https://get.k3s.io | sh -

The command as-is installs fine, but kubectl won't work without sudo. However, the default sudo setup on CentOS 7 does not include the default kubectl path in secure_path. As noted by @mattiaperi above, you can use the --write-kubeconfig-mode 644 trick during install, but that leaves the file, with its admin credentials, world readable.

My solution was to install via default method, and just use visudo to edit the secure_path variable to include /usr/local/bin

Seems to be working fine.

Note that if you just want to allow access in an already existing install you can edit the k3s.service.env file in /etc/systemd/system to contain the environment variable mentioned above: K3S_KUBECONFIG_MODE="644"

The above failed for me on an existing cluster... I just updated the permissions on the file and it seemed to do the trick: /etc/rancher/k3s $ sudo chmod 777 k3s.yaml

You do have to restart the service if you are trying to change the perms via env variable or command line option. It rewrites the file and sets permissions on startup.

Just do sudo chmod 644 /etc/rancher/k3s/k3s.yaml. It would be cool for this to be documented.

Kxrr commented

For those who installed k3s from the binary, starting the server with k3s server --write-kubeconfig-mode 644 fixed it.

Run this as root on existing installation (Ubuntu 20.04 LTS)

echo "K3S_KUBECONFIG_MODE=\"644\"" >> /etc/systemd/system/k3s.service.env


One more option to install and set flag --write-kubeconfig-mode

curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC='--write-kubeconfig-mode=644' sh -

If you're using k3sup to install k3s, add this to your install command: --k3s-extra-args '--write-kubeconfig-mode 664'.

Note: You may run into issues if you use this on your agents, I hit this error: flag provided but not defined: -write-kubeconfig-mode

jr200 commented

Adding a comment to try to stop users from unwittingly using these 644 hacks, as they undermine the point of this Issue+PR.

If you really do need admin kubectl access, then copy /etc/rancher/k3s/k3s.yaml to your user area ~/.kube/config (permissions 600), and if necessary, add export KUBECONFIG=~/.kube/config to your ~/.bashrc.

Based on @jr200's comment, here's a one-liner that copies the k3s.yaml (system-level kubeconfig) to one under ~/.kube. It uses a name specific to k3s in case you have an existing kubeconfig file pointing to other K8s clusters:

sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/k3s-config && sudo chown $USER: ~/.kube/k3s-config && export KUBECONFIG=~/.kube/k3s-config

This preserves the existing 600 permissions of k3s.yaml.

If you prefer a single kubeconfig, you can merge k3s-config into the default ~/.kube/config and do unset KUBECONFIG to use that.

If you have no existing ~/.kube/config you can do this instead:

sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config && sudo chown $USER: ~/.kube/config && unset KUBECONFIG
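The same copy-and-restrict pattern, sketched on scratch files so it can be tried without root (substitute the real paths from the one-liners above; the chown step is unnecessary here because copying as your own user already makes you the owner):

```shell
# Simulation on scratch files; stands in for the sudo cp / chown one-liners above.
src=$(mktemp)                  # stands in for /etc/rancher/k3s/k3s.yaml
dst_dir=$(mktemp -d)           # stands in for ~/.kube
cp "$src" "$dst_dir/config"
chmod 600 "$dst_dir/config"    # keep the copy owner-only, like k3s.yaml's default
stat -c '%a' "$dst_dir/config" # prints: 600
rm -rf "$src" "$dst_dir"
```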

One more option to install and set flag --write-kubeconfig-mode

curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC='--write-kubeconfig-mode=644' sh -

worked for me. Thanks @Panoptik

curl -sfL https://get.k3s.io | sh -
# Check for Ready node, takes maybe 30 seconds
k3s kubectl get node

It's frustrating to see this on the landing page of k3s.io - This won't take long… oooookay.

You can find many search results for the error error loading config file "/etc/rancher/k3s/k3s.yaml": open /etc/rancher/k3s/k3s.yaml: permission denied. Really confusing: it's the first step after the install script, and it doesn't work. I guess many users ran into this.

The setup procedure should be updated; otherwise it doesn't work out of the box.

The very basic quick start instructions do assume you're root. Some folks want it world readable, others don't - the latter makes more sense from a security perspective. Hopefully anyone attempting to use Kubernetes knows how to deal with file permissions or use sudo.

The more common solution to this problem, and the real fix to me, would be for the installer to write:

/etc/systemd/system/k3s.service.env:
K3S_KUBECONFIG_MODE="644"

/etc/profile.d/k3s.sh
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml

/etc/profile.d/k3s.csh
setenv KUBECONFIG /etc/rancher/k3s/k3s.yaml

done.

I agree with @jr200, so I used @rdonkin's method to copy the k3s.yaml to $HOME/.kube/config and change its ownership.
On my machine, for some reason, I also had to add this to my .bashrc (without it, kubectl used /etc/rancher/k3s/k3s.yaml by default):

export KUBECONFIG=~/.kube/config

IMHO k3s is designed for deploying Kubernetes clusters in many different environments (cloud, bare metal, Raspberry Pi), and this default behavior fits that usage well. For a local k3s development environment, rancher-desktop is pretty easy to get up and running, but I don't like using the KVM intermediary on Linux (it's great on Mac and Windows), and, at least for now, it does not allow access to Ingress via localhost (it does on Mac and Windows), which I raised an issue about in the rancher-desktop GitHub repo. I would prefer to be able to set up a native k3s for local development on Linux which allows localhost access to Ingress.

I think the best approach would probably be to create a k3s group and prompt the user to run usermod like docker installation does.

This suggestion seems to have been forgotten, but I can confirm that doing this post-installation works just fine (all commands run as root):

$ groupadd k3s
$ usermod -aG k3s pi
$ chgrp k3s /etc/rancher/k3s/k3s.yaml
$ chmod 660 /etc/rancher/k3s/k3s.yaml

EDIT: Cancel that - apparently the permissions of /etc/rancher/k3s/k3s.yaml are reset on reboot.


@scubbo I found your solution works just fine, except that I am using the solution in the comment above (#389 (comment)) to change the file permissions with the flag --write-kubeconfig-mode or K3S_KUBECONFIG_MODE, which is not reset on reboot.

Just to note a sister issue with this: on Ubuntu, apparently the default security settings won't keep the permissions for the file set to 644. If I run sudo chmod 644 /etc/rancher/k3s/k3s.yaml, the permissions are changed, but they quickly and spontaneously revert to 600. Historically this gave you time to do some work before it happened, but now you set it, run a command like kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/v1.6.0/deploy/longhorn.yaml, and it fails halfway through because the permissions revert before it finishes applying the updates.

I don't claim to know what to do about this one, but it may be worth an entry in the docs if an Ubuntu SME has any insight.

The permissions are only reset to whatever mode is set by --write-kubeconfig-mode when k3s starts. Is your node perhaps crashlooping due to some other error?

I noticed one thing just today: this only happens on under-resourced agents and servers. I'm developing an AI system and am resource-constrained. When the VMs hosting the servers and agents are at 80%+ memory pressure or otherwise saturated, this issue happens repeatedly; if I over-provision the VMs, it doesn't happen at all.

So yeah, the systems are under-resourced and k3s is crashing and restarting?

Other than the latency and the permissions of this file changing, I never saw evidence of the agent or server restarting, but I suppose that is what was happening. I will remember the --write-kubeconfig-mode flag mentioned in this conversation and set it to 644 going forward.

Sorry to take up the team's time on this one, but hopefully this thread is helpful to others in the future.