alexellis/k3sup

Support issue for getting Kubeconfig file on AWS EC2 with K3s 1.22

aktiver opened this issue · 15 comments

* /usr/src/app/users/20039871/aws/.kube = <custom path to .kube folder>

After running:

k3sup install --host <ip addy> --cluster --ssh-key <pem key> --user ubuntu --k3s-channel stable --local-path <custom path to .kube folder> --context k3s --k3s-extra-args '--no-deploy traefik --write-kubeconfig <custom path to .kube folder> --write-kubeconfig-mode 644'

It returns:

Saving file to: /usr/src/app/users/20039871/aws/.kube

# Test your cluster with:
export KUBECONFIG=/usr/src/app/users/20039871/aws/.kube
kubectl config set-context k3s
kubectl get node -o wide
Error: open /usr/src/app/users/20039871/aws/.kube: is a directory

My security group for the VMs is configured as follows:

| IP version | Type | Protocol | Port range | Source |
| --- | --- | --- | --- | --- |
| IPv4 | Custom UDP | UDP | 8472 | 0.0.0.0/0 |
| IPv4 | Custom TCP | TCP | 10250 | 0.0.0.0/0 |
| IPv4 | Custom TCP | TCP | 6443 | 0.0.0.0/0 |
| IPv4 | SSH | TCP | 22 | 0.0.0.0/0 |
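For reference, a rough AWS CLI sketch of equivalent inbound rules is below. The security group ID is a placeholder, and the wide-open 0.0.0.0/0 sources only mirror the table above; tighten them to your VPC CIDR where possible.

# Placeholder security group ID, substitute your own
SG_ID=sg-0123456789abcdef0

# Flannel VXLAN traffic between nodes
aws ec2 authorize-security-group-ingress --group-id "$SG_ID" --protocol udp --port 8472 --cidr 0.0.0.0/0

# Kubelet
aws ec2 authorize-security-group-ingress --group-id "$SG_ID" --protocol tcp --port 10250 --cidr 0.0.0.0/0

# Kubernetes API server (k3sup and joining agents connect here)
aws ec2 authorize-security-group-ingress --group-id "$SG_ID" --protocol tcp --port 6443 --cidr 0.0.0.0/0

# SSH, used by k3sup itself
aws ec2 authorize-security-group-ingress --group-id "$SG_ID" --protocol tcp --port 22 --cidr 0.0.0.0/0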

The above error persists and I'm not sure how to fix it. I'll gladly contribute money to the project if you can help!

As the error says, /usr/src/app/users/20039871/aws/.kube is a folder, not a file.

/usr/src/app/users/20039871/aws/.kube/config should fix your issue
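In other words, point --local-path at a file rather than a folder. A minimal sketch of the corrected command, keeping the rest of your flags as-is (the --write-kubeconfig extra arg can probably be dropped, since k3sup fetches the kubeconfig for you and it would hit the same file-vs-folder problem on the server side):

k3sup install --host <ip addy> --cluster \
  --ssh-key <pem key> --user ubuntu \
  --k3s-channel stable --context k3s \
  --local-path /usr/src/app/users/20039871/aws/.kube/config \
  --k3s-extra-args '--no-deploy traefik --write-kubeconfig-mode 644'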

@TomTucka Thanks for pointing that out, I made the fix and got past that, and subsequently ran the following:

export KUBECONFIG=/usr/src/app/users/20039871/aws/.kube/config
kubectl config set-context my_cool_proj
kubectl get node -o wide

Which returns:

NAME              STATUS   ROLES                       AGE     VERSION        INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION   CONTAINER-RUNTIME
ip-XXX-XX-XX-XX   Ready    control-plane,etcd,master   2m37s   v1.22.5+k3s1   XXX.XX.XX.XX   <none>        Ubuntu 18.04.6 LTS   5.4.0-1060-aws   containerd://1.5.8-k3s1

I then tried to join the other VM to the VM K3s was installed on, where --ip is the second VM and --server-ip is the original VM K3s was installed on:

k3sup join --ip x.xxx.xxx.xx --server-ip xx.xxx.xx.xx --user ubuntu --ssh-key my_key.pem

Which returns:

Running: k3sup join
Server IP: xx.xxx.xx.xx
K101ab0af1e62cXXXXXXXXXXXXXXXXXXXXXXXe731e06294a851b85351c6aa::server:425XXXXXXXXXXXXXXXXXXXXXb64
[INFO]  Finding release for channel stable
[INFO]  Using v1.22.5+k3s1 as release
[INFO]  Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.22.5+k3s1/sha256sum-amd64.txt
[INFO]  Skipping binary downloaded, installed k3s matches hash
[INFO]  Skipping installation of SELinux RPM
[INFO]  Skipping /usr/local/bin/kubectl symlink to k3s, already exists
[INFO]  Skipping /usr/local/bin/crictl symlink to k3s, already exists
[INFO]  Skipping /usr/local/bin/ctr symlink to k3s, command exists in PATH at /usr/bin/ctr
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-agent-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s-agent.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s-agent.service
[INFO]  systemd: Enabling k3s-agent unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s-agent.service → /etc/systemd/system/k3s-agent.service.
[INFO]  systemd: Starting k3s-agent
Logs: Created symlink /etc/systemd/system/multi-user.target.wants/k3s-agent.service → /etc/systemd/system/k3s-agent.service.
Output: [INFO]  Finding release for channel stable
[INFO]  Using v1.22.5+k3s1 as release
[INFO]  Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.22.5+k3s1/sha256sum-amd64.txt
[INFO]  Skipping binary downloaded, installed k3s matches hash
[INFO]  Skipping installation of SELinux RPM
[INFO]  Skipping /usr/local/bin/kubectl symlink to k3s, already exists
[INFO]  Skipping /usr/local/bin/crictl symlink to k3s, already exists
[INFO]  Skipping /usr/local/bin/ctr symlink to k3s, command exists in PATH at /usr/bin/ctr
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-agent-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s-agent.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s-agent.service
[INFO]  systemd: Enabling k3s-agent unit
[INFO]  systemd: Starting k3s-agent

Then I ran:
kubectl get nodes -o wide
Which returns:

NAME              STATUS   ROLES                       AGE   VERSION        INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION   CONTAINER-RUNTIME
ip-XXX-XX-XX-XX   Ready    control-plane,etcd,master   21m   v1.22.5+k3s1   xxx.xx.xx.xx   <none>        Ubuntu 18.04.6 LTS   5.4.0-1060-aws   containerd://1.5.8-k3s1

It doesn't seem to join the two VMs.
I'll note that I am installing and joining remotely, using the .pem key and kubeconfig file.

I've seen this behaviour recently when trying to join a worker to an HA cluster:

~/workspace/homelab main*
λ k get node -o wide
NAME       STATUS   ROLES                       AGE   VERSION        INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION   CONTAINER-RUNTIME
02933c66   Ready    control-plane,etcd,master   33h   v1.22.5+k3s1   10.20.40.22   <none>        Raspbian GNU/Linux 10 (buster)   5.10.52-v7l+     containerd://1.5.8-k3s1
904feab8   Ready    control-plane,etcd,master   33h   v1.22.5+k3s1   10.20.40.21   <none>        Raspbian GNU/Linux 10 (buster)   5.10.52-v7l+     containerd://1.5.8-k3s1
ef2d5f59   Ready    control-plane,etcd,master   33h   v1.22.5+k3s1   10.20.40.20   <none>        Raspbian GNU/Linux 10 (buster)   5.10.52-v7l+     containerd://1.5.8-k3s1

K3sup command

λ k3sup join --ip 10.20.40.23 --user pi --ssh-key ~/.ssh/homelab --server-ip 10.20.40.22 --server-user pi --k3s-version v1.22.5+k3s1 --k3s-extra-args '--disable traefik --disable servicelb --disable metrics-server --flannel-backend=none'

Haven't figured out why yet! Might be a bug
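In the meantime, a few generic checks on the node that fails to join might help narrow it down. This assumes the k3s-agent unit name from the join log above, with the real server IP substituted:

# On the node that should have joined (the agent)
sudo systemctl status k3s-agent --no-pager
sudo journalctl -u k3s-agent --no-pager | tail -n 50

# Confirm the agent can reach the API server over port 6443
nc -vz <server-ip> 6443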

@alexellis - Is there a previous version I can pull with the curl -sLS https://get.k3sup.dev | sh script? (See above please).

@aktiver You can with the commands below. https://get.k3sup.dev just points to the install script in the base of the repo called get.sh; I've pulled these commands from that script 🙂

  1. curl -sSL https://github.com/alexellis/k3sup/releases/download/$version/k3sup-<your_sys_architecture> --output k3sup
  2. chmod +x k3sup
  3. sudo cp k3sup /usr/local/bin/k3sup

To get your system's current architecture, you can run uname -m in a terminal.
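For example, on an x86_64 Linux box it might look like the following. The version is only an example, and the asset name should be checked against the release page for your architecture (on x86_64 the Linux binary is published as plain k3sup):

version=0.11.3   # example version, pick the release you want
curl -sSL https://github.com/alexellis/k3sup/releases/download/$version/k3sup --output k3sup
chmod +x k3sup
sudo cp k3sup /usr/local/bin/k3sup
k3sup version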

I'll try this to get some progress going. Which version would you recommend, i.e. the last one you knew worked with the join command?

Also, how much money do you need to fix the current version? I can drop some coin to help push this along, just let me know the amount.

You can set whatever version you like with the various version flags. See k3sup install --help and they are clearly listed.

      --k3s-channel string      Release channel: stable, latest, or i.e. v1.19 (default "v1.19")
      --k3s-version string      Set a version to install, overrides k3s-channel
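So, as a sketch, pinning a specific k3s release on a fresh install could look like this; the version string below is only an example:

k3sup install --host <ip addy> --user ubuntu --ssh-key <pem key> \
  --k3s-version v1.21.8+k3s1 \
  --local-path /usr/src/app/users/20039871/aws/.kube/config --context k3s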

I'm unclear on what the issue is here.

Did K3s break the way they store kubeconfig files in K3s 1.22, or is this an issue only present with the AMI you're using on AWS EC2?

To answer the original question: the kubeconfig file is retrieved automatically.

If you've already installed k3sup, run k3sup install --skip-install to fetch it again.

To merge, see the README or k3sup install --help which explains you'll need a combination of --context NAME --merge --local-path $HOME/.kube/config

      --local-path string       Local path to save the kubeconfig file (default "kubeconfig")
      --context string          Set the name of the kubeconfig context. (default "default")
      --merge                   Merge the config with existing kubeconfig if it already exists.
                                Provide the --local-path flag with --merge if a kubeconfig already exists in some other directory
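Put together, fetching the kubeconfig again from the existing server and merging it into your main kubeconfig might look like this; the context name is just an example:

k3sup install --host <ip addy> --user ubuntu --ssh-key <pem key> \
  --skip-install \
  --merge --local-path $HOME/.kube/config --context k3s-aws

export KUBECONFIG=$HOME/.kube/config
kubectl config get-contexts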

--write-kubeconfig isn't a flag that we have; are you working on your own fork?

> Also, how much money do you need to fix the current version? I can drop some coin to help push this along, just let me know the amount.

The GitHub issue you raised asked if you were a GitHub Sponsor, sponsors get priority.

/set title: Support issue for getting Kubeconfig file on AWS EC2 with K3s 1.22

/add label: support

@alexellis - just became a monthly sponsor, thank you!

Just an FYI, it didn't go through on your end. I have an OpenFaaS function that sends notifications, and it just said cancelled instead of created.

@alexellis that’s odd because my credit card was billed. What’s an email so I can send a screenshot?

Following up.

Support is for sponsors, you are not showing up as a sponsor. Feel free to reach out to alex@openfaas.com

/lock