install failed
pennpeng opened this issue · 12 comments
log:
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
[WARNING HTTPProxy]: Connection to "https://192.168.150.50" uses proxy "http://172.18.24.90:22222". If that is not intended, adjust your proxy settings
[WARNING HTTPProxyCIDR]: connection to "10.10.0.0/16" uses proxy "http://172.18.24.90:22222". This may lead to malfunctional cluster setup. Make sure that Pod and Services IP ranges specified correctly as exceptions in proxy configuration
[WARNING HTTPProxyCIDR]: connection to "10.20.0.0/16" uses proxy "http://172.18.24.90:22222". This may lead to malfunctional cluster setup. Make sure that Pod and Services IP ranges specified correctly as exceptions in proxy configuration
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
/bin/kubeadm [init --config=/etc/kubernetes/kubeadm.conf] err: exit status 1 4m26.70353459s
Resetting kubeadm...
/bin/kubeadm [reset --force --cri-socket=unix:///run/containerd/containerd.sock]
[preflight] running pre-flight checks
[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[reset] stopping the kubelet service
[reset] unmounting mounted directories in "/var/lib/kubelet"
[reset] deleting contents of stateful directories: [/var/lib/etcd /var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes]
[reset] deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually.
For example:
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.
W0319 18:46:33.298246 6055 reset.go:213] [reset] Unable to fetch the kubeadm-config ConfigMap, using etcd pod spec as fallback: failed to get config map: Get https://192.168.150.50:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config: net/http: TLS handshake timeout
/bin/kubeadm [reset --force --cri-socket=unix:///run/containerd/containerd.sock] err: <nil> 10.096034565s
Error: exit status 1
[root@kubesphere kubernetes]# systemctl status kubelet -l
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/etc/systemd/system/kubelet.service; disabled; vendor preset: disabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─0-containerd.conf, 10-kubeadm.conf
Active: active (running) since Tue 2019-03-19 18:58:20 CST; 1s ago
Docs: https://kubernetes.io/docs/
Main PID: 7246 (kubelet)
Tasks: 18
Memory: 26.0M
CGroup: /system.slice/kubelet.service
└─7246 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock
Mar 19 18:58:21 kubesphere kubelet[7246]: E0319 18:58:21.457124 7246 kubelet.go:2266] node "kubesphere" not found
Mar 19 18:58:21 kubesphere kubelet[7246]: I0319 18:58:21.459905 7246 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 19 18:58:21 kubesphere kubelet[7246]: I0319 18:58:21.461490 7246 kubelet_node_status.go:72] Attempting to register node kubesphere
Mar 19 18:58:21 kubesphere kubelet[7246]: E0319 18:58:21.461995 7246 kubelet_node_status.go:94] Unable to register node "kubesphere" with API server: Post https://192.168.150.50:6443/api/v1/nodes: dial tcp 192.168.150.50:6443: connect: connection refused
Mar 19 18:58:21 kubesphere kubelet[7246]: E0319 18:58:21.557409 7246 kubelet.go:2266] node "kubesphere" not found
Mar 19 18:58:21 kubesphere kubelet[7246]: E0319 18:58:21.657547 7246 kubelet.go:2266] node "kubesphere" not found
Mar 19 18:58:21 kubesphere kubelet[7246]: E0319 18:58:21.746162 7246 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://192.168.150.50:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.150.50:6443: connect: connection refused
Mar 19 18:58:21 kubesphere kubelet[7246]: E0319 18:58:21.746904 7246 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.150.50:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubesphere&limit=500&resourceVersion=0: dial tcp 192.168.150.50:6443: connect: connection refused
Mar 19 18:58:21 kubesphere kubelet[7246]: E0319 18:58:21.747999 7246 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://192.168.150.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubesphere&limit=500&resourceVersion=0: dial tcp 192.168.150.50:6443: connect: connection refused
Mar 19 18:58:21 kubesphere kubelet[7246]: E0319 18:58:21.757721 7246 kubelet.go:2266] node "kubesphere" not found
It seems from the log that you have an HTTP proxy set up:
[WARNING HTTPProxy]: Connection to "https://192.168.150.50" uses proxy "http://172.18.24.90:22222". If that is not intended, adjust your proxy settings.
And your kubelet error shows that it can't connect to the API server (TCP connection refused):
Mar 19 18:58:21 kubesphere kubelet[7246]: E0319 18:58:21.746162 7246 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://192.168.150.50:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.150.50:6443: connect: connection refused
Could you please share your proxy environment configuration:
env | grep -i _proxy= | sort
NO_PROXY must contain the IP ranges defined with --kubernetes-infrastructure-cidr, --kubernetes-pod-network-cidr, and --kubernetes-service-cidr. For the default values this would be the following:
export NO_PROXY="192.168.64.0/20,10.10.0.0/16,10.20.0.0/16,.kubernetes,.kubernetes.default,.kubernetes.default.svc,.kubernetes.default.svc.cluster.local"
Please adjust NO_PROXY to your network configuration accordingly.
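If the proxy is genuinely required in your environment, it also has to be visible to the services kubeadm relies on. A minimal sketch of persisting it for the kubelet via a systemd drop-in (the drop-in file name and the variable values are assumptions derived from the addresses in your log, so adjust them to your setup):
mkdir -p /etc/systemd/system/kubelet.service.d
cat > /etc/systemd/system/kubelet.service.d/http-proxy.conf <<'EOF'
[Service]
# Proxy from the warning above; change to match your environment.
Environment="HTTP_PROXY=http://172.18.24.90:22222"
Environment="HTTPS_PROXY=http://172.18.24.90:22222"
# Keep the API server address and the pod/service CIDRs off the proxy.
Environment="NO_PROXY=192.168.150.50,10.10.0.0/16,10.20.0.0/16,.svc,.cluster.local"
EOF
systemctl daemon-reload
systemctl restart kubelet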
Why is it pulling an image from k8s.gcr.io?
dial tcp 108.177.97.82:443: i/o timeout
Mar 21 11:06:12 localhost kubelet: E0321 11:06:12.212869 6572 pod_workers.go:190] Error syncing pod 6f947a25aaf142618e41c0a9b56040f4 ("kube-controller-manager-kubesphere_kube-system(6f947a25aaf142618e41c0a9b56040f4)"), skipping: failed to "CreatePodSandbox" for "kube-controller-manager-kubesphere_kube-system(6f947a25aaf142618e41c0a9b56040f4)" with CreatePodSandboxError: "CreatePodSandbox for pod \"kube-controller-manager-kubesphere_kube-system(6f947a25aaf142618e41c0a9b56040f4)\" failed: rpc error: code = Unknown desc = failed to get sandbox image \"k8s.gcr.io/pause:3.1\": failed to pull image \"k8s.gcr.io/pause:3.1\": failed to resolve image \"k8s.gcr.io/pause:3.1\": no available registry endpoint: failed to do request: Head https://k8s.gcr.io/v2/pause/manifests/3.1: dial tcp 108.177.97.82:443: i/o timeout"
Mar 21 11:06:12 localhost kubelet: E0321 11:06:12.213333 6572 remote_runtime.go:96] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to get sandbox image "k8s.gcr.io/pause:3.1": failed to pull image "k8s.gcr.io/pause:3.1": failed to resolve image "k8s.gcr.io/pause:3.1": no available registry endpoint: failed to do request: Head https://k8s.gcr.io/v2/pause/manifests/3.1: dial tcp 108.177.97.82:443: i/o timeout
Mar 21 11:06:12 localhost kubelet: E0321 11:06:12.213375 6572 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "kube-apiserver-kubesphere_kube-system(dcc44bcb9fab97d23ea76d9175babe47)" failed: rpc error: code = Unknown desc = failed to get sandbox image "k8s.gcr.io/pause:3.1": failed to pull image "k8s.gcr.io/pause:3.1": failed to resolve image "k8s.gcr.io/pause:3.1": no available registry endpoint: failed to do request: Head https://k8s.gcr.io/v2/pause/manifests/3.1: dial tcp 108.177.97.82:443: i/o timeout
Mar 21 11:06:12 localhost kubelet: E0321 11:06:12.213388 6572 kuberuntime_manager.go:662] createPodSandbox for pod "kube-apiserver-kubesphere_kube-system(dcc44bcb9fab97d23ea76d9175babe47)" failed: rpc error: code = Unknown desc = failed to get sandbox image "k8s.gcr.io/pause:3.1": failed to pull image "k8s.gcr.io/pause:3.1": failed to resolve image "k8s.gcr.io/pause:3.1": no available registry endpoint: failed to do request: Head https://k8s.gcr.io/v2/pause/manifests/3.1: dial tcp 108.177.97.82:443: i/o timeout
Mar 21 11:06:12 localhost kubelet: E0321 11:06:12.213455 6572 pod_workers.go:190] Error syncing pod dcc44bcb9fab97d23ea76d9175babe47 ("kube-apiserver-kubesphere_kube-system(dcc44bcb9fab97d23ea76d9175babe47)"), skipping: failed to "CreatePodSandbox" for "kube-apiserver-kubesphere_kube-system(dcc44bcb9fab97d23ea76d9175babe47)" with CreatePodSandboxError: "CreatePodSandbox for pod \"kube-apiserver-kubesphere_kube-system(dcc44bcb9fab97d23ea76d9175babe47)\" failed: rpc error: code = Unknown desc = failed to get sandbox image \"k8s.gcr.io/pause:3.1\": failed to pull image \"k8s.gcr.io/pause:3.1\": failed to resolve image \"k8s.gcr.io/pause:3.1\": no available registry endpoint: failed to do request: Head https://k8s.gcr.io/v2/pause/manifests/3.1: dial tcp 108.177.97.82:443: i/o timeout"
[root@kubesphere kubelet.service.d]# crictl images
IMAGE TAG IMAGE ID SIZE
docker.io/banzaicloud/coredns 1.2.6 f59dcacceff45 12.2MB
docker.io/banzaicloud/etcd 3.2.24 3cab8e1b9802c 63.4MB
docker.io/banzaicloud/hyperkube v1.13.3 68b0696339174 180MB
docker.io/banzaicloud/pause 3.1 da86e6ba6ca19 326kB
The kubelet uses a hard-coded pause container image during pod initialization: https://github.com/kubernetes/kubernetes/search?q=PodSandboxImage&unscoped_q=PodSandboxImage
#14 will address this.
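In the meantime, one possible workaround is to point containerd at the mirrored pause image so the kubelet's hard-coded reference is never pulled. This is only a sketch assuming containerd 1.2's CRI plugin config layout; verify the section name against your containerd version, and if a [plugins.cri] section already exists in the file, set the key there instead of appending a duplicate:
cat >> /etc/containerd/config.toml <<'EOF'
[plugins.cri]
  # Use the mirrored pause image instead of k8s.gcr.io/pause:3.1.
  sandbox_image = "docker.io/banzaicloud/pause:3.1"
EOF
systemctl restart containerd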
Please note that the Weave network plugin will be installed later on, which will add two more images:
# crictl images
IMAGE TAG IMAGE ID SIZE
docker.io/banzaicloud/auto-approver 0.1.0 085ce11431983 16MB
docker.io/banzaicloud/coredns 1.2.6 f59dcacceff45 12.2MB
docker.io/banzaicloud/etcd 3.2.24 3cab8e1b9802c 63.4MB
docker.io/banzaicloud/hyperkube v1.13.3 68b0696339174 180MB
docker.io/banzaicloud/pause 3.1 da86e6ba6ca19 326kB
docker.io/weaveworks/weave-kube 2.5.1 1f394ae9e2260 40.5MB
docker.io/weaveworks/weave-npc 2.5.1 789b7f4960344 13.5MB
same problem
[root@kubesphere kubernetes]# systemctl status kubelet -l
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/etc/systemd/system/kubelet.service; disabled; vendor preset: disabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─0-containerd.conf, 10-kubeadm.conf
Active: active (running) since Fri 2019-03-22 17:43:30 CST; 1min 55s ago
Docs: https://kubernetes.io/docs/
Main PID: 10504 (kubelet)
Tasks: 19
Memory: 30.3M
CGroup: /system.slice/kubelet.service
└─10504 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --anonymous-auth=false --authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt --event-qps=0 --feature-gates=RotateKubeletServerCertificate=true --protect-kernel-defaults=true --read-only-port=0 --rotate-certificates=true --streaming-connection-idle-timeout=5m --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock
Mar 22 17:45:25 kubesphere kubelet[10504]: E0322 17:45:25.956897 10504 pod_workers.go:190] Error syncing pod 304f9e65b941638069f0ef000250eccd ("kube-scheduler-kubesphere_kube-system(304f9e65b941638069f0ef000250eccd)"), skipping: failed to "CreatePodSandbox" for "kube-scheduler-kubesphere_kube-system(304f9e65b941638069f0ef000250eccd)" with CreatePodSandboxError: "CreatePodSandbox for pod \"kube-scheduler-kubesphere_kube-system(304f9e65b941638069f0ef000250eccd)\" failed: rpc error: code = Unknown desc = failed to get sandbox image \"k8s.gcr.io/pause:3.1\": failed to pull image \"k8s.gcr.io/pause:3.1\": failed to resolve image \"k8s.gcr.io/pause:3.1\": no available registry endpoint: failed to do request: Head https://k8s.gcr.io/v2/pause/manifests/3.1: dial tcp 108.177.97.82:443: i/o timeout"
Mar 22 17:45:25 kubesphere kubelet[10504]: E0322 17:45:25.956904 10504 pod_workers.go:190] Error syncing pod 26a4d4d11a44d508fcc89c82c268853e ("etcd-kubesphere_kube-system(26a4d4d11a44d508fcc89c82c268853e)"), skipping: failed to "CreatePodSandbox" for "etcd-kubesphere_kube-system(26a4d4d11a44d508fcc89c82c268853e)" with CreatePodSandboxError: "CreatePodSandbox for pod \"etcd-kubesphere_kube-system(26a4d4d11a44d508fcc89c82c268853e)\" failed: rpc error: code = Unknown desc = failed to get sandbox image \"k8s.gcr.io/pause:3.1\": failed to pull image \"k8s.gcr.io/pause:3.1\": failed to resolve image \"k8s.gcr.io/pause:3.1\": no available registry endpoint: failed to do request: Head https://k8s.gcr.io/v2/pause/manifests/3.1: dial tcp 108.177.97.82:443: i/o timeout"
Mar 22 17:45:26 kubesphere kubelet[10504]: E0322 17:45:26.038783 10504 kubelet.go:2266] node "kubesphere" not found
Mar 22 17:45:26 kubesphere kubelet[10504]: E0322 17:45:26.041989 10504 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://192.168.150.50:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.150.50:6443: connect: connection refused
Mar 22 17:45:26 kubesphere kubelet[10504]: E0322 17:45:26.042844 10504 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.150.50:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubesphere&limit=500&resourceVersion=0: dial tcp 192.168.150.50:6443: connect: connection refused
Mar 22 17:45:26 kubesphere kubelet[10504]: E0322 17:45:26.043997 10504 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://192.168.150.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubesphere&limit=500&resourceVersion=0: dial tcp 192.168.150.50:6443: connect: connection refused
Mar 22 17:45:26 kubesphere kubelet[10504]: E0322 17:45:26.139062 10504 kubelet.go:2266] node "kubesphere" not found
Mar 22 17:45:26 kubesphere kubelet[10504]: E0322 17:45:26.146355 10504 kubelet.go:2192] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Mar 22 17:45:26 kubesphere kubelet[10504]: E0322 17:45:26.239273 10504 kubelet.go:2266] node "kubesphere" not found
Mar 22 17:45:26 kubesphere kubelet[10504]: E0322 17:45:26.339515 10504 kubelet.go:2266] node "kubesphere" not found
[root@kubesphere kubernetes]# pke version
kubeadm version: struct { cmd.ClientVersion "json:\"clientVersion\"" }{ClientVersion:cmd.ClientVersion{GitVersion:"0.2.1", GitCommit:"8027e50", GitTreeState:"", BuildDate:"2019-03-21T15:31:28Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}}
[root@kubesphere kubernetes]#
---
[controlplane] Adding extra host path mount "admission-control-config-dir" to "kube-apiserver"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
/bin/kubeadm [init --config=/etc/kubernetes/kubeadm.conf] err: exit status 1 4m4.018936803s
Resetting kubeadm...
/bin/kubeadm [reset --force --cri-socket=unix:///run/containerd/containerd.sock]
[preflight] running pre-flight checks
[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[reset] stopping the kubelet service
[reset] unmounting mounted directories in "/var/lib/kubelet"
[reset] deleting contents of stateful directories: [/var/lib/etcd /var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes]
[reset] deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually.
For example:
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.
W0322 17:47:24.430461 10667 reset.go:213] [reset] Unable to fetch the kubeadm-config ConfigMap, using etcd pod spec as fallback: failed to get config map: Get https://192.168.150.50:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config: dial tcp 192.168.150.50:6443: connect: connection refused
/bin/kubeadm [reset --force --cri-socket=unix:///run/containerd/containerd.sock] err: <nil> 78.772284ms
Also, crictl has no tag option, so I can't retag docker.io/banzaicloud/pause to k8s.gcr.io...
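A possible workaround, assuming the ctr binary shipped with your containerd supports the images tag subcommand (treat this as a sketch, not a verified fix), is to retag the already-present image inside the k8s.io namespace that the CRI plugin uses:
# Make the kubelet's hard-coded k8s.gcr.io reference resolve locally.
ctr -n k8s.io images tag docker.io/banzaicloud/pause:3.1 k8s.gcr.io/pause:3.1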
It seems kube-apiserver exited with an error.
List the kube-apiserver container and note its CONTAINER ID:
$ crictl ps -a -l name=kube-apiserver -n 1
CONTAINER ID IMAGE CREATED STATE NAME ATTEMPT POD ID
05ab01a60ec81 f59dcacceff45 3 minutes ago Running coredns 0 c03e7623ba8d9
Get the container logs for kube-apiserver, using the CONTAINER ID from the previous command:
crictl logs 05ab01a60ec81
And paste the logs here.
Also, please do a clean install just to be sure that containerd's new configuration takes effect.
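If no kube-apiserver container ever shows up in the list, the containerd logs themselves may explain why the sandbox could not be created (this assumes a systemd-managed containerd):
journalctl -xeu containerd | tail -n 50
crictl ps -a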
There is no "k8s.gcr.io/pause:3.1" image:
[root@kubesphere ~]# crictl ps -a -l name=kube-apiserver -n 1
CONTAINER ID IMAGE CREATED STATE NAME ATTEMPT POD ID
[root@kubesphere ~]#
Please do a clean install from scratch (with no containerd installed on the host machine).
Use the 0.2.2 version:
curl -v https://banzaicloud.com/downloads/pke/pke-0.2.2 -o /usr/local/bin/pke
and verify the right PKE version is being used:
chmod +x /usr/local/bin/pke
/usr/local/bin/pke version -o yaml
clientVersion:
buildDate: "2019-03-26T09:11:42Z"
compiler: gc
gitCommit: 13ff571
gitTreeState: ""
gitVersion: 0.2.2
goVersion: go1.12.1
platform: linux/amd64
gitVersion must be 0.2.2.
Closing due to inactivity.