rimusz/coreos-kubernetes-cluster-osx

Waiting for kubernetes cluster to be ready...(never ends)

v1k0d3n opened this issue · 16 comments

has anyone else run into this issue?

Installing fleet units from '~/coreos-k8s-cluster/fleet' folder:
Triggered global unit kube-kubelet.service start
Triggered global unit kube-proxy.service start
Unit fleet-ui.service launched
Unit kube-apiserver.service launched on 07dd1cda.../172.17.15.101
Unit kube-controller-manager.service launched on 07dd1cda.../172.17.15.101
Unit kube-scheduler.service launched on 07dd1cda.../172.17.15.101
Finished installing fleet units
UNIT                            MACHINE                     ACTIVE      SUB
fleet-ui.service                07dd1cda.../172.17.15.101   activating  start-pre
kube-apiserver.service          07dd1cda.../172.17.15.101   active      running
kube-controller-manager.service 07dd1cda.../172.17.15.101   active      running
kube-kubelet.service            01735c8a.../172.17.15.102   active      running
kube-kubelet.service            4814d900.../172.17.15.103   active      running
kube-proxy.service              01735c8a.../172.17.15.102   active      running
kube-proxy.service              4814d900.../172.17.15.103   active      running
kube-scheduler.service          07dd1cda.../172.17.15.101   active      running

Waiting for Kubernetes cluster to be ready. This can take a few minutes...
\

this never goes away. is there any way to debug this a little more and figure out what's going on in the background? there's probably something easy i'm overlooking. are you going to move away from virtualbox and go with xhyve, like you've done with kube-Solo? kube-Solo seems like a really awesome approach.

@v1k0d3n just check the fleet units' status.
I'm thinking of moving this one to xhyve too. It's a bit tricky to run 3 VMs; the problem is catching all the VMs' IPs properly, but I have some ideas :)
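for reference, checking the units from the host looks something like this (a sketch using standard fleetctl commands; kube-apiserver.service is just the example unit from your output):

fleetctl list-units                        # list every unit with its MACHINE, ACTIVE and SUB state
fleetctl status kube-apiserver.service     # systemctl status for one unit, run over SSH
fleetctl journal kube-apiserver.service    # tail that unit's journal for error messages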

that would be awesome; i really like the approach. do you get the same issue when bringing up your cluster (the hanging)?

checking the problem on a fresh cluster

no, I did not get any problems there. can you check the fleet units, please?

a little bit better (as far as getting more error information)...i suspended and started the cluster again since we were going back and forth, and now i'm getting an error:

Waiting for Kubernetes cluster to be ready. This can take a few minutes...
error: couldn't read version from server: Get https://10.245.1.2/api: dial tcp 10.245.1.2:443: i/o timeout

the fleet-ui reports:
172.17.15.101 role=control
|-> fleet-ui = running
|-> kube-apiserver = running
|-> kube-controller-manager = running
|-> kube-scheduler = running

172.17.15.102 role=node
|-> kube-kubelet = running
|-> kube-proxy = running

172.17.15.103 role=node
|-> kube-kubelet = running
|-> kube-proxy = running

your kubectl config file points to master 10.245.1.2, which is causing the problem for the local setup.
I already have a workaround for this problem in kube-solo; I need to port it to this app too.
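in the meantime, repointing kubectl at the master VM by hand should work (a sketch with standard kubectl config commands; the context name osx-cluster is made up, and it assumes the insecure API port 8080 is open on the master):

kubectl config set-cluster osx-cluster --server=http://172.17.15.101:8080   # aim at the master VM
kubectl config set-context osx-cluster --cluster=osx-cluster                # bind a context to that cluster
kubectl config use-context osx-cluster                                      # make it the current context

after that, kubectl get nodes should hit 172.17.15.101 instead of 10.245.1.2.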

ah, that's true. in general, there should be a better way to manage the kubectl config file. i forget how many environments i've tested and used over the past few months. i'm starting to lose track...thanks for pointing me back to this. i should've caught it before now!

the newly released version v0.5.5 has a fix for kubectl to connect to the proper cluster

for some reason the new version is still having this issue. i removed all references to the kubectl config that I once had:

nickburns-mac:~ v1k0d3n$ kubectl config view
apiVersion: v1
clusters: []
contexts: []
current-context: ""
kind: Config
preferences: {}
users: []
nickburns-mac:~ v1k0d3n$

and it is still hanging for some reason. i reviewed the source and it appears that something may be getting botched in the export section of first-init.command?

nickburns-mac:~ v1k0d3n$ kubectl get nodes
error: couldn't read version from server: Get http://localhost:8080/api: dial tcp 127.0.0.1:8080: connection refused

nickburns-mac:~ v1k0d3n$ export KUBERNETES_MASTER=http://172.17.15.101:8080
nickburns-mac:~ v1k0d3n$ kubectl get nodes
NAME            LABELS                                 STATUS
172.17.15.102   kubernetes.io/hostname=172.17.15.102   Ready
172.17.15.103   kubernetes.io/hostname=172.17.15.103   Ready
nickburns-mac:~ v1k0d3n$

like i said, i've never programmed anything substantial in my life, but i'm trying to learn as much as i can. could this be the source of the issue?

v1k

ah, my bad, I left one more reference to my kube-solo App folder in there.
Will fix it on Monday

I have released v0.5.6, which enforces the use of ~/coreos-k8s-cluster/control/kubeconfig, but you must use the shell opened via Up or OS Shell from the App's menu.
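if you want to stay in your own terminal instead, pointing kubectl at that same file by hand should also work (a sketch, assuming the default install path):

export KUBECONFIG=~/coreos-k8s-cluster/control/kubeconfig   # use the cluster's generated kubeconfig
kubectl get nodes                                           # should now reach the right master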

yup, that fixed it too. thanks again...this tool is awesome! also, i noticed this at some point:

os_shell.command line 33:
-sexport KUBERNETES_MASTER=http://172.17.15.101:8080
+export KUBERNETES_MASTER=http://172.17.15.101:8080

cool then. I noticed that spelling error too and will fix it in the next version. kubectl uses the kubeconfig file, so the KUBERNETES_MASTER=http://172.17.15.101:8080 env variable is not used anymore
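a quick way to sanity-check which server kubectl actually talks to (plain kubectl, nothing app-specific):

kubectl config view | grep server   # show the server URL from the active kubeconfig
kubectl cluster-info                # confirm the master answers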

released v0.5.7 with that spelling error fix

ha! man, you're turning these out fast. seems like i'm not the only one who tries to catch up over the weekends. thanks for the updates! i really like your projects a lot. 👍

I do not like loose ends, especially the ones I can fix quickly.
Thanks for liking my projects; would you mind starring them, please?
And some tweets would be nice too :) so more people can use them