giantswarm/prometheus

On kubernetes 1.7.3 the data is null

canghai908 opened this issue · 12 comments

When I use this on Kubernetes 1.7.3 (installed by kubeadm), the data is null. What happened?
and the cluster status is null

My environment:
OS: CentOS 7.3 x86_64 (1611)
Kubernetes:

Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.3", GitCommit:"2c2fe6e8278a5db2d15a013987b53968c743f2a1", GitTreeState:"clean", BuildDate:"2017-08-03T07:00:21Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.3", GitCommit:"2c2fe6e8278a5db2d15a013987b53968c743f2a1", GitTreeState:"clean", BuildDate:"2017-08-03T06:43:48Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

kubectl get pod --all-namespaces

NAMESPACE NAME READY STATUS RESTARTS AGE
default default-http-backend-3515556356-sn52n 1/1 Running 1 3d
devo devo-ui-c8bsx 1/1 Running 1 2d
devo facp-controller-vbv46 1/1 Running 0 1d
kube-system etcd-node147 1/1 Running 1 3d
kube-system heapster-3904197848-8h054 1/1 Running 0 2d
kube-system kube-apiserver-node147 1/1 Running 1 3d
kube-system kube-controller-manager-node147 1/1 Running 1 3d
kube-system kube-dns-2425271678-050lp 3/3 Running 3 3d
kube-system kube-flannel-ds-0lsvg 2/2 Running 3 3d
kube-system kube-flannel-ds-czmtc 2/2 Running 3 3d
kube-system kube-proxy-88q3w 1/1 Running 1 3d
kube-system kube-proxy-jd8gr 1/1 Running 2 3d
kube-system kube-scheduler-node147 1/1 Running 1 3d
kube-system kubernetes-dashboard-3313488171-lk1m7 1/1 Running 0 2d
kube-system monitoring-grafana-2027494249-1nlh2 1/1 Running 0 2d
kube-system monitoring-influxdb-3487384708-30jfj 1/1 Running 0 2d
monitoring alertmanager-4158139002-92636 1/1 Running 0 2h
monitoring grafana-core-1069951769-v3895 1/1 Running 0 2h
monitoring kube-state-metrics-654070635-lnt29 1/1 Running 0 1h
monitoring kube-state-metrics-654070635-nzh0l 1/1 Running 0 1h
monitoring node-directory-size-metrics-s08vh 2/2 Running 0 1h
monitoring node-directory-size-metrics-vbw4v 2/2 Running 0 1h
monitoring prometheus-core-669051596-68rgp 1/1 Running 0 50m
monitoring prometheus-node-exporter-gd33b 1/1 Running 0 1h
monitoring prometheus-node-exporter-mxgn5 1/1 Running 0 1h
nginx-ingress nginx-ingress-controller-2029042266-gfrzt 1/1 Running 1 3d
nginx-ingress nginx-ingress-controller-2029042266-vzlt6 1/1 Running 2 2d

Same here. Here is the log:
Error from server (BadRequest): a container name must be specified for pod node-directory-size-metrics-bjzdv, choose one of: [read-du caddy]
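That error comes from running kubectl logs against a pod with more than one container without naming one. A minimal sketch of the fix, using the pod and container names from the error message (the node-directory-size-metrics pods run in the monitoring namespace per the listing above):

```shell
# The pod has two containers (read-du and caddy); pick one explicitly with -c
kubectl logs node-directory-size-metrics-bjzdv -n monitoring -c read-du
kubectl logs node-directory-size-metrics-bjzdv -n monitoring -c caddy
```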

Is the configuration wrong?

On k8s version 1.7 there is no metrics data for the dashboard. Is this a bug?

On k8s versions above 1.7, Grafana has no data. I hope this can be fixed, thanks!

I have solved this. I added the following to the ConfigMap for prometheus-core:

  - job_name: 'kubernetes-cadvisor'
    scheme: http
    tls_config:
      ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
    kubernetes_sd_configs:
      - role: node
    relabel_configs:
      - source_labels: [__address__]
        regex: '(.*):10250'
        replacement: '${1}:4194'
        target_label: __address__
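This relabeling rewrites each discovered node address from the kubelet port 10250 to 4194, the standalone cAdvisor port the kubelet still exposed on Kubernetes 1.7. After editing the ConfigMap, Prometheus must reload the new configuration; a sketch, assuming the manifests' monitoring namespace and default names (the exact ConfigMap name and pod label may differ in your deployment):

```shell
kubectl -n monitoring edit configmap prometheus-core
# Restart the Prometheus pod so it picks up the updated config
# (label selector is an assumption; check with `kubectl -n monitoring get pods --show-labels`)
kubectl -n monitoring delete pod -l app=prometheus
```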

Is that the word 'address' or did you put the address of something there?

@arrkaye I just edited his comment to use a code block, I think __address__ is a keyword here. I will look into it later today to hopefully merge a fix.

I updated my cluster to v1.7.5 and still have some problems getting metrics from Prometheus.

I added the following content to prometheus.yaml as the documentation says:

      - job_name: 'kubernetes-cadvisor'
        scheme: https
        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        kubernetes_sd_configs:
          - role: node
        relabel_configs:
          - action: labelmap
            regex: __meta_kubernetes_node_label_(.+)
          - target_label: __address__
            replacement: kubernetes.default.svc:443
          - source_labels: [__meta_kubernetes_node_name]
            regex: (.+)
            target_label: __metrics_path__
            replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor

but Prometheus gets the error "server returned HTTP status 403 Forbidden", and the "Kubernetes Pods resource" panel still has no data
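A 403 on the /api/v1/nodes/&lt;name&gt;/proxy/metrics/cadvisor path usually means the Prometheus service account is not allowed to use the API server's nodes/proxy subresource. A hedged sketch of the kind of ClusterRole rule that grants it (the ClusterRole name here is an assumption; bind it to whatever ServiceAccount your Prometheus pod actually uses, and use apiVersion rbac.authorization.k8s.io/v1beta1 on Kubernetes 1.7):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus
rules:
  - apiGroups: [""]
    # nodes/proxy is the subresource behind /api/v1/nodes/<name>/proxy/...
    resources: ["nodes", "nodes/proxy", "services", "endpoints", "pods"]
    verbs: ["get", "list", "watch"]
```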

@justlooks's fix worked for me. I have data in my dashboard now, running Kubernetes 1.7.5.

But I can't get metric data on Kubernetes 1.8.2. Why is that? @simon-k8s @canghai908

@like-inspur did you add the content that @justlooks suggested to the prometheus.yaml?

@simon-k8s OK ,it works, thanks!

It should be OK with the current master. Feel free to reopen if it still happens to you.