Testing case - Initialize kubeadm cluster master
pablodav opened this issue · 0 comments
pablodav commented
After some fixes (not yet submitted as a pull request here):
https://github.com/pablodav/kubernetes-for-windows/commits/feature/testing-win1709
I have successfully run the install playbook.
Now I'm testing the create cluster playbook.
It seems to be stuck at:
TASK [ubuntu/kubernetes-master : Initialize kubeadm cluster master] **************
I'm not sure why it just stays there without any error. `docker ps` on the master shows:
sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
fb062e31a703 e03746fe22c3 "kube-apiserver --..." 54 seconds ago Up 53 seconds k8s_kube-apiserver_kube-apiserver-l240lnx2101_kube-system_e8024548793f559c77d1ad102f084506_3
5f2ee0caf004 k8s.gcr.io/kube-controller-manager-amd64@sha256:98a3a7dc4c6c60dbeb0273302d697edaa89bd10fceed87ad5144c0b0acc5cced "kube-controller-m..." 7 minutes ago Up 7 minutes k8s_kube-controller-manager_kube-controller-manager-l240lnx2101_kube-system_853198beb4e440f4ec7c956092553e14_0
2265a3925f15 k8s.gcr.io/kube-scheduler-amd64@sha256:4770e1f1eef2229138e45a2b813c927e971da9c40256a7e2321ccf825af56916 "kube-scheduler --..." 7 minutes ago Up 7 minutes k8s_kube-scheduler_kube-scheduler-l240lnx2101_kube-system_1fdcffa6db578101b0946acfc07f2bc4_0
b90eacb2d8d8 k8s.gcr.io/pause-amd64:3.1 "/pause" 7 minutes ago Up 7 minutes k8s_POD_etcd-l240lnx2101_kube-system_8851929b5482d8060ad5c851bd3a8dee_0
5d60fc2877aa k8s.gcr.io/pause-amd64:3.1 "/pause" 7 minutes ago Up 7 minutes k8s_POD_kube-apiserver-l240lnx2101_kube-system_e8024548793f559c77d1ad102f084506_0
2365e76eae19 k8s.gcr.io/pause-amd64:3.1 "/pause" 7 minutes ago Up 7 minutes k8s_POD_kube-controller-manager-l240lnx2101_kube-system_853198beb4e440f4ec7c956092553e14_0
bfaf0852abf7 k8s.gcr.io/pause-amd64:3.1 "/pause" 7 minutes ago Up 7 minutes k8s_POD_kube-scheduler-l240lnx2101_kube-system_1fdcffa6db578101b0946acfc07f2bc4_0
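One thing that stands out in the output above: the etcd pod only has its pause (sandbox) container running, with no etcd container alongside it, and the kube-apiserver container name ends in `_3`, which suggests it has been restarted a few times (probably because it cannot reach etcd). These are generic diagnostics, not part of the playbook, but they might show why etcd never comes up:

```shell
# Check recent kubelet logs for errors starting the etcd static pod.
sudo journalctl -u kubelet --no-pager | tail -n 50

# List all etcd containers, including exited ones, to see if it crashed.
sudo docker ps -a --filter "name=etcd"

# If an etcd container exists (even exited), inspect its last log lines.
sudo docker logs $(sudo docker ps -aq --filter "name=k8s_etcd" | head -n 1) 2>&1 | tail -n 20
```

If etcd is crash-looping, its logs usually explain why (bad data dir, certificate problems, or an address it cannot bind to).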
and the `ps` output:
ps -aux | grep kubeadm
root 10661 0.0 0.0 4504 696 ? S 10:52 0:00 /bin/sh -c kubeadm init --service-cidr "10.201.0.0/16" --pod-network-cidr "10.200.0.0/16" --node-name "l240lnx2101"
root 10662 0.8 1.4 114748 60560 ? Sl 10:52 0:06 kubeadm init --service-cidr 10.201.0.0/16 --pod-network-cidr 10.200.0.0/16 --node-name l240lnx2101
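Since `kubeadm init` is still running but never finishes, it is most likely blocked waiting for the control plane health check to pass. A generic way to see which phase it is stuck on (standard kubeadm flags, not from the playbook) is to reset and re-run it with verbose logging:

```shell
# Tear down the half-initialized control plane first.
# Depending on the kubeadm version, this may need: sudo kubeadm reset -f
sudo kubeadm reset

# Retry with the same options plus verbose output, so kubeadm
# prints each phase and shows where it is waiting.
sudo kubeadm init --service-cidr "10.201.0.0/16" \
                  --pod-network-cidr "10.200.0.0/16" \
                  --node-name "l240lnx2101" \
                  --v=5
```

With `--v=5`, kubeadm typically logs repeated health-check attempts against the API server while it waits, which helps pinpoint whether the hang is the apiserver, etcd, or the kubelet.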
I'm just testing with this role; I'll see what I can do to fix or change something here...