Failed to get kubernetes address: No kubernetes source found
Zhang21 opened this issue · 26 comments
ENV:
centos7x86_64
Linux master 3.10.0-862.9.1.el7.x86_64
k8s v1.11.1
metrics-server 0.2.x
When I run the command from the README, some error messages are printed:
# Kubernetes > 1.8
$ kubectl create -f deploy/1.8+/
#error pod logs
I0810 06:39:00.946780 1 heapster.go:71] /metrics-server
I0810 06:39:00.946833 1 heapster.go:72] Metrics Server version v0.2.1
F0810 06:39:00.946840 1 heapster.go:79] Failed to get kubernetes address: No kubernetes source found.
#deployment status
metrics-server
Back-off restarting failed container
#container
metrics-server-f5bc46bd7-hx8dc
Back-off restarting failed container
Maybe it can't find the api-server address, or authorization is forbidden?
The kube-apiserver is running on port 6443.
Are there any docs I can read?
Thanks.
Same problem for me.
Same problem here with the same versions as the reporter:
centos7x86_64
Linux master 3.10.0-862.9.1.el7.x86_64
k8s v1.11.1
metrics-server 0.2.x
Same for me
Ubuntu 16.04.5 LTS
4.4.0-130-generic x86_64
kubernetes v1.11.1
metrics-server 0.2.1
Same for me too.
Centos 7.5.1804
3.10.0-862.2.3.el7.x86_64
kubernetes v1.11.1
metrics-server 0.2.1
It seems to be because deploy/1.8+/metrics-server-deployment.yaml doesn't pass any options.
I got metrics-server running by adding a "--source" option, referenced from https://github.com/kubernetes/heapster/blob/master/deploy/kube-config/google/heapster.yaml:
+++ b/deploy/1.8+/metrics-server-deployment.yaml
@@ -31,6 +31,9 @@ spec:
       - name: metrics-server
         image: gcr.io/google_containers/metrics-server-amd64:v0.2.1
         imagePullPolicy: Always
+        command:
+        - /metrics-server
+        - --source=kubernetes:https://kubernetes.default
         volumeMounts:
         - name: tmp-dir
           mountPath: /tmp
metrics-server may need other options as well; please check which options should be added.
ADDITION: this problem appeared after commit a823af8. @DirectXMan12, would you check this problem?
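To apply the change, one would typically edit the manifest as in the diff above, reapply it, and then watch the rollout and logs; a hedged sketch using standard kubectl commands (assumes the default kube-system deployment name):

```shell
# Reapply the edited manifest and confirm the pod comes up cleanly
kubectl apply -f deploy/1.8+/metrics-server-deployment.yaml
kubectl -n kube-system rollout status deployment/metrics-server
kubectl -n kube-system logs -l k8s-app=metrics-server
```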
@unclok bingo!
Thank you so much.
I compared deploy/1.8+/metrics-server-deployment.yaml with https://github.com/kubernetes/heapster/blob/master/deploy/kube-config/google/heapster.yaml, and just adding the options you wrote makes it run well.
Here is the full deploy/1.8+/metrics-server-deployment.yaml:
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metrics-server
  namespace: kube-system
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    k8s-app: metrics-server
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  template:
    metadata:
      name: metrics-server
      labels:
        k8s-app: metrics-server
    spec:
      serviceAccountName: metrics-server
      volumes:
      # mount in tmp so we can safely use from-scratch images and/or read-only containers
      - name: tmp-dir
        emptyDir: {}
      containers:
      - name: metrics-server
        image: gcr.io/google_containers/metrics-server-amd64:v0.2.1
        imagePullPolicy: Always
        # issue-97 start
        command:
        - /metrics-server
        - --source=kubernetes:https://kubernetes.default
        # issue-97 end
        volumeMounts:
        - name: tmp-dir
          mountPath: /tmp
@slayerjain @VTommyV @Art-Iko @vishalcs05
My problem is solved, how about you?
thanks @unclok , it worked for me too 👍
Also solved the problem for me, thank you @unclok !
thx @unclok, your solution worked for me!
At the beginning my problem was the same as yours. After applying your method, I got this error:
I0817 06:37:33.969360 1 heapster.go:71] /metrics-server --source=kubernetes:https://kubernetes.default
I0817 06:37:33.969429 1 heapster.go:72] Metrics Server version v0.2.1
I0817 06:37:33.969858 1 configs.go:61] Using Kubernetes client with master "https://kubernetes.default" and version
I0817 06:37:33.969880 1 configs.go:62] Using kubelet port 10255
I0817 06:37:33.986131 1 heapster.go:128] Starting with Metric Sink
I0817 06:37:34.425236 1 serving.go:308] Generated self-signed cert (apiserver.local.config/certificates/apiserver.crt, apiserver.local.config/certificates/apiserver.key)
F0817 06:37:34.636145 1 heapster.go:97] Could not create the API server: cluster doesn't provide requestheader-client-ca-file
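For anyone hitting the "cluster doesn't provide requestheader-client-ca-file" error: it usually means the kube-apiserver was started without the aggregation-layer flags, so the extension-apiserver-authentication ConfigMap has no requestheader-client-ca-file entry for metrics-server to read. A hedged sketch of the kube-apiserver flags typically required (the certificate paths are kubeadm-style examples; substitute your cluster's own files):

```
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
--requestheader-allowed-names=front-proxy-client
--requestheader-username-headers=X-Remote-User
--requestheader-group-headers=X-Remote-Group
--requestheader-extra-headers-prefix=X-Remote-Extra-
--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
```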
@disusu
So, am I right that your problem does not belong to metrics-server, but is a k8s configuration issue?
@unclok It fixed the problem, but I get an error when running kubectl top pod:
W0817 14:10:19.588853 20069 top_pod.go:263] Metrics not available for pod default/cpudemo, age: 23m34.588832802s error: Metrics not available for pod default/cpudemo, age: 23m34.588832802s
or when running kubectl top node:
error: metrics not available yet
@michelgokan I just hit the same problem. Change the config to:
--source=kubernetes.summary_api:https://kubernetes.default?kubeletHttps=true&kubeletPort=10250&insecure=true
as per issue 77
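As an aside on the format above: the --source value is a typed URL whose query parameters tune how the kubelets are scraped (HTTPS on/off, port, certificate checking). A small Python sketch, stdlib only and purely illustrative, pulls the pieces apart:

```python
from urllib.parse import urlparse, parse_qs

# Heapster-style source string: "<type>:<url>?<options>"
source = ("kubernetes.summary_api:https://kubernetes.default"
          "?kubeletHttps=true&kubeletPort=10250&insecure=true")

kind, _, rest = source.partition(":")   # split off the source type
url = urlparse(rest)
opts = {k: v[0] for k, v in parse_qs(url.query).items()}

print(kind)        # source type consumed by metrics-server
print(url.netloc)  # API server address
print(opts)        # kubelet scraping options
```

So kubeletHttps=true with insecure=true means "scrape the kubelets over HTTPS on port 10250, but skip certificate verification", which is why it helps on clusters with self-signed kubelet certs.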
@VTommyV I did, and I'm getting the following error: TLS handshake timeout.
When I run kubectl -n kube-system logs $(kubectl get pods --namespace=kube-system -l k8s-app=metrics-server -o name):
I0817 13:38:18.912764 1 heapster.go:71] /metrics-server --source=kubernetes.summary_api:https://kubernetes.default?kubeletHttps=true&kubeletPort=10250&insecure=true
I0817 13:38:19.672284 1 heapster.go:72] Metrics Server version v0.2.1
I0817 13:38:19.722498 1 configs.go:61] Using Kubernetes client with master "https://kubernetes.default" and version
I0817 13:38:19.792283 1 configs.go:62] Using kubelet port 10250
I0817 13:38:19.803056 1 heapster.go:128] Starting with Metric Sink
E0817 13:40:35.842497 1 reflector.go:205] github.com/kubernetes-incubator/metrics-server/metrics/util/util.go:52: Failed to list *v1.Node: Get https://kubernetes.default/api/v1/nodes?resourceVersion=0: net/http: TLS handshake timeout
W0817 13:42:17.052019 1 manager.go:147] Failed to get all responses in time (got 0/3)
W0817 13:42:46.202377 1 manager.go:147] Failed to get all responses in time (got 0/3)
W0817 13:43:08.685236 1 manager.go:102] Failed to get kubelet_summary:172.16.16.224:10250 response in time
W0817 13:43:41.224834 1 manager.go:102] Failed to get kubelet_summary:172.16.16.224:10250 response in time
W0817 13:43:48.455650 1 manager.go:102] Failed to get kubelet_summary:172.16.16.222:10250 response in time
W0817 13:44:11.923256 1 manager.go:102] Failed to get kubelet_summary:172.16.16.222:10250 response in time
W0817 13:44:13.333695 1 manager.go:102] Failed to get kubelet_summary:172.16.16.223:10250 response in time
W0817 13:44:13.982940 1 manager.go:147] Failed to get all responses in time (got 0/3)
W0817 13:44:15.523481 1 manager.go:102] Failed to get kubelet_summary:172.16.16.223:10250 response in time
W0817 13:44:19.773669 1 manager.go:102] Failed to get kubelet_summary:172.16.16.223:10250 response in time
W0817 13:44:19.774235 1 manager.go:102] Failed to get kubelet_summary:172.16.16.222:10250 response in time
W0817 13:44:19.775070 1 manager.go:102] Failed to get kubelet_summary:172.16.16.224:10250 response in time
W0817 13:45:59.573555 1 manager.go:102] Failed to get kubelet_summary:172.16.16.224:10250 response in time
W0817 13:46:00.123617 1 manager.go:102] Failed to get kubelet_summary:172.16.16.223:10250 response in time
W0817 13:46:00.124209 1 manager.go:102] Failed to get kubelet_summary:172.16.16.222:10250 response in time
I0817 13:46:06.734004 1 serving.go:308] Generated self-signed cert (apiserver.local.config/certificates/apiserver.crt, apiserver.local.config/certificates/apiserver.key)
W0817 13:46:14.132591 1 manager.go:147] Failed to get all responses in time (got 0/3)
W0817 13:46:36.772501 1 manager.go:147] Failed to get all responses in time (got 0/3)
W0817 13:46:54.744132 1 manager.go:102] Failed to get kubelet_summary:172.16.16.224:10250 response in time
W0817 13:46:55.893157 1 manager.go:102] Failed to get kubelet_summary:172.16.16.222:10250 response in time
W0817 13:46:55.897497 1 manager.go:102] Failed to get kubelet_summary:172.16.16.223:10250 response in time
W0817 13:47:53.123459 1 manager.go:102] Failed to get kubelet_summary:172.16.16.223:10250 response in time
W0817 13:47:54.433292 1 manager.go:102] Failed to get kubelet_summary:172.16.16.222:10250 response in time
W0817 13:47:58.013772 1 manager.go:102] Failed to get kubelet_summary:172.16.16.224:10250 response in time
W0817 13:47:58.013787 1 manager.go:147] Failed to get all responses in time (got 0/3)
W0817 13:49:17.142531 1 manager.go:147] Failed to get all responses in time (got 0/3)
W0817 13:49:32.493400 1 manager.go:102] Failed to get kubelet_summary:172.16.16.222:10250 response in time
W0817 13:49:33.773708 1 manager.go:102] Failed to get kubelet_summary:172.16.16.224:10250 response in time
W0817 13:49:56.604009 1 manager.go:102] Failed to get kubelet_summary:172.16.16.222:10250 response in time
W0817 13:50:03.564395 1 manager.go:102] Failed to get kubelet_summary:172.16.16.224:10250 response in time
W0817 13:50:23.592639 1 manager.go:147] Failed to get all responses in time (got 0/3)
W0817 13:50:45.073596 1 manager.go:102] Failed to get kubelet_summary:172.16.16.223:10250 response in time
E0817 13:50:38.142557 1 summary.go:97] error while getting metrics summary from Kubelet worker1(172.16.16.223:10250): Get https://172.16.16.223:10250/stats/summary/: net/http: TLS handshake timeout
W0817 13:51:00.202632 1 manager.go:102] Failed to get kubelet_summary:172.16.16.223:10250 response in time
W0817 13:51:00.233488 1 manager.go:102] Failed to get kubelet_summary:172.16.16.224:10250 response in time
W0817 13:51:28.952855 1 manager.go:147] Failed to get all responses in time (got 0/3)
W0817 13:51:46.423706 1 manager.go:102] Failed to get kubelet_summary:172.16.16.223:10250 response in time
W0817 13:51:42.563491 1 manager.go:102] Failed to get kubelet_summary:172.16.16.222:10250 response in time
W0817 13:52:14.892626 1 manager.go:147] Failed to get all responses in time (got 0/3)
W0817 13:52:16.052116 1 manager.go:102] Failed to get kubelet_summary:172.16.16.224:10250 response in time
W0817 13:52:14.934028 1 manager.go:102] Failed to get kubelet_summary:172.16.16.223:10250 response in time
W0817 13:52:21.623276 1 manager.go:102] Failed to get kubelet_summary:172.16.16.222:10250 response in time
W0817 13:53:03.922576 1 manager.go:147] Failed to get all responses in time (got 0/3)
W0817 13:53:19.323795 1 manager.go:102] Failed to get kubelet_summary:172.16.16.223:10250 response in time
W0817 13:53:17.264375 1 manager.go:102] Failed to get kubelet_summary:172.16.16.224:10250 response in time
W0817 13:53:17.263422 1 manager.go:102] Failed to get kubelet_summary:172.16.16.222:10250 response in time
W0817 13:54:02.752691 1 manager.go:147] Failed to get all responses in time (got 0/3)
W0817 13:54:16.433503 1 manager.go:102] Failed to get kubelet_summary:172.16.16.222:10250 response in time
W0817 13:54:16.926072 1 manager.go:102] Failed to get kubelet_summary:172.16.16.224:10250 response in time
W0817 13:54:16.926934 1 manager.go:102] Failed to get kubelet_summary:172.16.16.223:10250 response in time
W0817 13:55:05.872639 1 manager.go:147] Failed to get all responses in time (got 0/3)
W0817 13:55:46.594022 1 manager.go:102] Failed to get kubelet_summary:172.16.16.224:10250 response in time
W0817 13:55:49.926834 1 manager.go:102] Failed to get kubelet_summary:172.16.16.223:10250 response in time
W0817 13:55:53.113563 1 manager.go:102] Failed to get kubelet_summary:172.16.16.222:10250 response in time
W0817 13:56:13.292871 1 manager.go:147] Failed to get all responses in time (got 0/3)
W0817 13:56:14.218513 1 manager.go:102] Failed to get kubelet_summary:172.16.16.223:10250 response in time
W0817 13:56:14.219578 1 manager.go:102] Failed to get kubelet_summary:172.16.16.224:10250 response in time
W0817 13:56:20.393346 1 manager.go:102] Failed to get kubelet_summary:172.16.16.222:10250 response in time
I0817 13:56:54.448539 1 heapster.go:101] Starting Heapster API server...
[restful] 2018/08/17 13:57:19 log.go:33: [restful/swagger] listing is available at https:///swaggerapi
[restful] 2018/08/17 13:57:21 log.go:33: [restful/swagger] https:///swaggerui/ is mapped to folder /swagger-ui/
I0817 13:57:21.806264 1 serve.go:85] Serving securely on 0.0.0.0:443
I0817 13:59:18.362751 1 logs.go:41] http: TLS handshake error from 10.32.0.1:34428: EOF
@VTommyV It's very strange, because after a few minutes it was fixed! I didn't do anything; I just tried again after drinking a cup of coffee!
It's the magic of caffeine, it seems!
Haha, I think it's because the metrics-server pod was respawned when you made the change. Glad it's fixed before the weekend!
I just hit the same problem. I added
command:
- /metrics-server
- --source=kubernetes.summary_api:https://kubernetes.default
or
--source=kubernetes.summary_api:https://kubernetes.default?kubeletHttps=true&kubeletPort=10250&insecure=true
but the metrics-server pod goes into CrashLoopBackOff...
@ratulbasak please indicate the version of metrics-server that you're using.
@ratulbasak I got the same problem today :(
NAME READY STATUS RESTARTS AGE
etcd-docker-for-desktop 1/1 Running 25 79d
kube-apiserver-docker-for-desktop 1/1 Running 28 79d
kube-controller-manager-docker-for-desktop 1/1 Running 28 79d
kube-dns-86f4d74b45-q8ktw 3/3 Running 0 79d
kube-proxy-w6zrb 1/1 Running 0 79d
kube-scheduler-docker-for-desktop 1/1 Running 14 79d
metrics-server-69b6d5fd5f-hq5k8 0/1 CrashLoopBackOff 3 1m
tiller-deploy-f9b8476d-szg2b 1/1 Running 0 79d
Any solution for this?
again, please indicate the metrics-server version
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale
.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close
.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/open
@prominatedk feel free to open a new issue