kubernetes-retired/heapster

404 Not Found with influxdb-grafana-controller.yaml


I created the pods and services using the files in heapster/deploy/kube-config/influxdb/.
The API objects are created and look fine.
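
For reference, that step is just a single create against the manifest directory (a sketch; path relative to the repo checkout):

$ ./kubectl create -f heapster/deploy/kube-config/influxdb/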

$ ./kubectl get po
NAME                                  READY     STATUS    RESTARTS   AGE
elasticsearch-logging-v1-7kbw4        1/1       Running   0          7d
elasticsearch-logging-v1-q7vyj        1/1       Running   0          7d
fluentd-elasticsearch-10.128.112.21   1/1       Running   2          7d
fluentd-elasticsearch-10.128.112.22   1/1       Running   1          7d
fluentd-elasticsearch-10.128.112.23   1/1       Running   1          7d
heapster-u0qom                        1/1       Running   2          1h
influxdb-grafana-u0g2t                2/2       Running   0          25m
kibana-logging-v1-z9rhh               1/1       Running   0          7d
kube-dns-v9-hjmjd                     4/4       Running   1          7d
kube-ui-v3-tzr4h                      1/1       Running   0          2h
kubedash-8icur                        1/1       Running   0          7d
$ ./kubectl get se
NAME                    LABELS                                                                                              SELECTOR                        IP(S)          PORT(S)
elasticsearch-logging   k8s-app=elasticsearch-logging,kubernetes.io/cluster-service=true,kubernetes.io/name=Elasticsearch   k8s-app=elasticsearch-logging   11.0.153.232   9200/TCP
heapster                kubernetes.io/cluster-service=true,kubernetes.io/name=Heapster                                      k8s-app=heapster                11.0.131.31    80/TCP
kibana-logging          k8s-app=kibana-logging,kubernetes.io/cluster-service=true,kubernetes.io/name=Kibana                 k8s-app=kibana-logging          11.0.174.16    5601/TCP
kube-dns                k8s-app=kube-dns,kubernetes.io/cluster-service=true,kubernetes.io/name=KubeDNS                      k8s-app=kube-dns                11.0.0.10      53/UDP
                                                                                                                                                                           53/TCP
kube-ui                 k8s-app=kube-ui,kubernetes.io/cluster-service=true,kubernetes.io/name=KubeUI                        k8s-app=kube-ui                 11.0.236.42    80/TCP
kubedash                name=kubedash                                                                                       name=kubedash                   11.0.247.134   80/TCP
monitoring-grafana      kubernetes.io/cluster-service=true,kubernetes.io/name=monitoring-grafana                            name=influxGrafana              11.0.161.217   80/TCP
monitoring-influxdb     <none>                                                                                              name=influxGrafana              11.0.46.23     8083/TCP
                                                                                                                                                                           8086/TCP
$ ./kubectl cluster-info
Kubernetes master is running at https://10.128.112.11
Elasticsearch is running at https://10.128.112.11/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging
Heapster is running at https://10.128.112.11/api/v1/proxy/namespaces/kube-system/services/heapster
Kibana is running at https://10.128.112.11/api/v1/proxy/namespaces/kube-system/services/kibana-logging
KubeDNS is running at https://10.128.112.11/api/v1/proxy/namespaces/kube-system/services/kube-dns
KubeUI is running at https://10.128.112.11/api/v1/proxy/namespaces/kube-system/services/kube-ui
monitoring-grafana is running at https://10.128.112.11/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana

In a browser, I try to access the URL below without success. (I can access the other services without a problem.)
https://10.128.112.11/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana/
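
A quick way to reproduce this outside the browser (a sketch; -k skips TLS verification and is for testing only, and whatever auth flags your API server needs would be added here):

$ curl -k -s -o /dev/null -w '%{http_code}\n' \
    https://10.128.112.11/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana/
# prints 404 here, matching the Grafana log entries below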

Grafana logs are shown below.

$ ./kubectl logs -f influxdb-grafana-u0g2t grafana
Influxdb service URL is provided.
Using the following URL for InfluxDB: http://monitoring-influxdb:8086
Using the following backend access mode for InfluxDB: proxy
Starting Grafana in the background
Waiting for Grafana to come up...
2015/10/16 03:59:23 [I] Starting Grafana
2015/10/16 03:59:23 [I] Version: 2.1.0, Commit: v2.1.0, Build date: 2015-08-04 14:19:48 +0000 UTC
2015/10/16 03:59:23 [I] Configuration Info
Config files:
  [0]: /usr/share/grafana/conf/defaults.ini
  [1]: /etc/grafana/grafana.ini
Command lines overrides:
  [0]: default.paths.data=/var/lib/grafana
  [1]: default.paths.logs=/var/log/grafana
Environment variables used:
  [0]: default.paths.data=/var/lib/grafana
  [1]: default.paths.logs=/var/log/grafana
Paths:
  home: /usr/share/grafana
  data: /var/lib/grafana
  logs: /var/log/grafana

2015/10/16 03:59:23 [I] Database: sqlite3, ConnectionString: file:/var/lib/grafana/grafana.db?cache=shared&mode=rwc&_loc=Local
2015/10/16 03:59:23 [I] Migrator: Starting DB migration
2015/10/16 03:59:23 [I] Migrator: exec migration id: create migration_log table
2015/10/16 03:59:23 [I] Migrator: exec migration id: create user table
2015/10/16 03:59:23 [I] Migrator: exec migration id: add unique index user.login
2015/10/16 03:59:23 [I] Migrator: exec migration id: add unique index user.email
2015/10/16 03:59:23 [I] Migrator: exec migration id: drop index UQE_user_login - v1
2015/10/16 03:59:23 [I] Migrator: exec migration id: drop index UQE_user_email - v1
2015/10/16 03:59:23 [I] Migrator: exec migration id: Rename table user to user_v1 - v1
2015/10/16 03:59:23 [I] Migrator: exec migration id: create user table v2
2015/10/16 03:59:24 [I] Migrator: exec migration id: create index UQE_user_login - v2
2015/10/16 03:59:24 [I] Migrator: exec migration id: create index UQE_user_email - v2
2015/10/16 03:59:24 [I] Migrator: exec migration id: copy data_source v1 to v2
2015/10/16 03:59:24 [I] Migrator: exec migration id: Drop old table user_v1
2015/10/16 03:59:24 [I] Migrator: exec migration id: create star table
2015/10/16 03:59:24 [I] Migrator: exec migration id: add unique index star.user_id_dashboard_id
2015/10/16 03:59:24 [I] Migrator: exec migration id: create org table v1
2015/10/16 03:59:24 [I] Migrator: exec migration id: create index UQE_org_name - v1
2015/10/16 03:59:24 [I] Migrator: exec migration id: create org_user table v1
2015/10/16 03:59:24 [I] Migrator: exec migration id: create index IDX_org_user_org_id - v1
2015/10/16 03:59:24 [I] Migrator: exec migration id: create index UQE_org_user_org_id_user_id - v1
2015/10/16 03:59:24 [I] Migrator: exec migration id: copy data account to org
2015/10/16 03:59:24 [I] Migrator: skipping migration id: copy data account to org, condition not fulfilled
2015/10/16 03:59:24 [I] Migrator: exec migration id: copy data account_user to org_user
2015/10/16 03:59:24 [I] Migrator: skipping migration id: copy data account_user to org_user, condition not fulfilled
2015/10/16 03:59:24 [I] Migrator: exec migration id: Drop old table account
2015/10/16 03:59:24 [I] Migrator: exec migration id: Drop old table account_user
2015/10/16 03:59:24 [I] Migrator: exec migration id: create dashboard table
2015/10/16 03:59:24 [I] Migrator: exec migration id: add index dashboard.account_id
2015/10/16 03:59:24 [I] Migrator: exec migration id: add unique index dashboard_account_id_slug
2015/10/16 03:59:24 [I] Migrator: exec migration id: create dashboard_tag table
2015/10/16 03:59:24 [I] Migrator: exec migration id: add unique index dashboard_tag.dasboard_id_term
2015/10/16 03:59:25 [I] Migrator: exec migration id: drop index UQE_dashboard_tag_dashboard_id_term - v1
2015/10/16 03:59:25 [I] Migrator: exec migration id: Rename table dashboard to dashboard_v1 - v1
2015/10/16 03:59:25 [I] Migrator: exec migration id: create dashboard v2
2015/10/16 03:59:25 [I] Migrator: exec migration id: create index IDX_dashboard_org_id - v2
2015/10/16 03:59:25 [I] Migrator: exec migration id: create index UQE_dashboard_org_id_slug - v2
2015/10/16 03:59:25 [I] Migrator: exec migration id: copy dashboard v1 to v2
2015/10/16 03:59:25 [I] Migrator: exec migration id: drop table dashboard_v1
2015/10/16 03:59:25 [I] Migrator: exec migration id: alter dashboard.data to mediumtext v1
2015/10/16 03:59:25 [I] Migrator: exec migration id: create data_source table
2015/10/16 03:59:25 [I] Migrator: exec migration id: add index data_source.account_id
2015/10/16 03:59:25 [I] Migrator: exec migration id: add unique index data_source.account_id_name
2015/10/16 03:59:25 [I] Migrator: exec migration id: drop index IDX_data_source_account_id - v1
2015/10/16 03:59:25 [I] Migrator: exec migration id: drop index UQE_data_source_account_id_name - v1
2015/10/16 03:59:25 [I] Migrator: exec migration id: Rename table data_source to data_source_v1 - v1
2015/10/16 03:59:25 [I] Migrator: exec migration id: create data_source table v2
2015/10/16 03:59:25 [I] Migrator: exec migration id: create index IDX_data_source_org_id - v2
2015/10/16 03:59:25 [I] Migrator: exec migration id: create index UQE_data_source_org_id_name - v2
2015/10/16 03:59:25 [I] Migrator: exec migration id: copy data_source v1 to v2
2015/10/16 03:59:25 [I] Migrator: exec migration id: Drop old table data_source_v1 #2
2015/10/16 03:59:26 [I] Migrator: exec migration id: create api_key table
2015/10/16 03:59:26 [I] Migrator: exec migration id: add index api_key.account_id
2015/10/16 03:59:26 [I] Migrator: exec migration id: add index api_key.key
2015/10/16 03:59:26 [I] Migrator: exec migration id: add index api_key.account_id_name
2015/10/16 03:59:26 [I] Migrator: exec migration id: drop index IDX_api_key_account_id - v1
2015/10/16 03:59:26 [I] Migrator: exec migration id: drop index UQE_api_key_key - v1
2015/10/16 03:59:26 [I] Migrator: exec migration id: drop index UQE_api_key_account_id_name - v1
2015/10/16 03:59:26 [I] Migrator: exec migration id: Rename table api_key to api_key_v1 - v1
2015/10/16 03:59:26 [I] Migrator: exec migration id: create api_key table v2
2015/10/16 03:59:26 [I] Migrator: exec migration id: create index IDX_api_key_org_id - v2
2015/10/16 03:59:26 [I] Migrator: exec migration id: create index UQE_api_key_key - v2
2015/10/16 03:59:26 [I] Migrator: exec migration id: create index UQE_api_key_org_id_name - v2
2015/10/16 03:59:26 [I] Migrator: exec migration id: copy api_key v1 to v2
2015/10/16 03:59:26 [I] Migrator: exec migration id: Drop old table api_key_v1
2015/10/16 03:59:26 [I] Migrator: exec migration id: create dashboard_snapshot table v4
2015/10/16 03:59:26 [I] Migrator: exec migration id: drop table dashboard_snapshot_v4 #1
2015/10/16 03:59:26 [I] Migrator: exec migration id: create dashboard_snapshot table v5 #2
2015/10/16 03:59:26 [I] Migrator: exec migration id: create index UQE_dashboard_snapshot_key - v5
2015/10/16 03:59:26 [I] Migrator: exec migration id: create index UQE_dashboard_snapshot_delete_key - v5
2015/10/16 03:59:26 [I] Migrator: exec migration id: create index IDX_dashboard_snapshot_user_id - v5
2015/10/16 03:59:27 [I] Migrator: exec migration id: alter dashboard_snapshot to mediumtext v2
2015/10/16 03:59:27 [I] Created default admin user: admin
2015/10/16 03:59:27 [I] Listen: http://0.0.0.0:3000/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana
..Grafana is up and running.
Creating default influxdb datasource...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   243  100    37  100   206   1704   9492 --:--:-- --:--:-- --:--:--  9809
HTTP/1.1 200 OK
Content-Type: application/json; charset=UTF-8
Set-Cookie: grafana_sess=ac5761b9d8f99b4f; Path=/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana; HttpOnly
Date: Fri, 16 Oct 2015 03:59:27 GMT
Content-Length: 37

{"id":1,"message":"Datasource added"}
Importing default dashboards...
Importing /dashboards/cluster.json ...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 32167  100    60  100 32107    813   425k --:--:-- --:--:-- --:--:--  435k
HTTP/1.1 100 Continue

HTTP/1.1 200 OK
Content-Type: application/json; charset=UTF-8
Set-Cookie: grafana_sess=0d38b4ca3c380add; Path=/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana; HttpOnly
Date: Fri, 16 Oct 2015 03:59:27 GMT
Content-Length: 60

{"slug":"kubernetes-cluster","status":"success","version":0}
Done importing /dashboards/cluster.json
Importing /dashboards/containers.json ...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 10483  100    52  100 10431   1508   295k --:--:-- --:--:-- --:--:--  308k
HTTP/1.1 100 Continue

HTTP/1.1 200 OK
Content-Type: application/json; charset=UTF-8
Set-Cookie: grafana_sess=e36e3d71df389ce3; Path=/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana; HttpOnly
Date: Fri, 16 Oct 2015 03:59:27 GMT
Content-Length: 52

{"slug":"containers","status":"success","version":0}
Done importing /dashboards/containers.json

Bringing Grafana back to the foreground
exec /usr/sbin/grafana-server --config=/etc/grafana/grafana.ini cfg:default.paths.data=/var/lib/grafana cfg:default.paths.logs=/var/log/grafana
              Dload  Upload   Total   Spent    Left  Speed
100 32167  100    60  100 32107    813   425k --:--:-- --:--:-- --:--:--  435k
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 10483  100    52  100 10431   1508   295k --:--:-- --:--:-- --:--:--  308k
2015/10/16 04:06:14 [I] Completed /api/v1/proxy/namespaces/kube-system/services/monitoring-grafana/css/grafana.dark.min.4efc02b6.css 404 Not Found in 2.125149ms
2015/10/16 04:06:14 [I] Completed /api/v1/proxy/namespaces/kube-system/services/monitoring-grafana/app/app.3c38f44f.js 404 Not Found in 1.696634ms

@vishh From the last two lines, it seems some CSS/JS files are missing from the Grafana container.
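
One way to check that (a sketch; the paths assume Grafana's stock layout, with static assets under the home directory shown in the log above):

$ ./kubectl exec influxdb-grafana-u0g2t -c grafana -- \
    ls /usr/share/grafana/public/css /usr/share/grafana/public/app
# if the hashed .css/.js files are listed, they do exist in the container and
# the 404s come from URL rewriting rather than missing files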

I have tried using a NodePort as shown below. Same result, no luck.

grafana-service.yaml

apiVersion: v1
kind: Service
metadata:
  labels:
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-grafana
  name: monitoring-grafana
  namespace: kube-system
spec:
  # In a production setup, we recommend accessing Grafana through an external Loadbalancer
  # or through a public IP. 
  # type: LoadBalancer
  type: "NodePort"
  ports:
  - port: 80
    targetPort: 3000
    nodePort: 30015
  selector:
    name: influxGrafana
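
To confirm the nodePort was actually allocated (a sketch):

$ ./kubectl --namespace=kube-system get svc monitoring-grafana -o yaml | grep nodePort
# expect: nodePort: 30015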

It might be an href issue: the HTML source shows that the hrefs use the wrong relative URLs.
So I changed influxdb-grafana-controller.yaml as shown below.

          - name: GF_SERVER_ROOT_URL
            value: /
            #value: /api/v1/proxy/namespaces/kube-system/services/monitoring-grafana/
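
To verify the restarted container picked up the change (a sketch; <influxdb-grafana-pod> is whatever name the controller gives the recreated pod):

$ ./kubectl exec <influxdb-grafana-pod> -c grafana -- env | grep GF_SERVER_ROOT_URL
# expect GF_SERVER_ROOT_URL=/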

Now I can see the Grafana UI via the proxy URL, but it can't load any content. A browser pop-up shows "the server could not find the requested resource".
https://10.128.112.11/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana/

Luckily I have NodePort access too.
http://10.128.112.11:30015/
It works!
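
The same can be confirmed from the command line (a sketch):

$ curl -s -o /dev/null -w '%{http_code}\n' http://10.128.112.11:30015/
# 200, or a 302 redirect to /login on a fresh Grafana install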

Hopefully there will be a fix so that both the proxy URL and the NodePort URL work.

Which version of Kubernetes were you using? I believe @vishh made some changes to kube-proxy on the Kubernetes side to make Grafana work via the proxy URL.

I'm using v1.0.6; here is the proof.

$ ./kubectl version
Client Version: version.Info{Major:"1", Minor:"0", GitVersion:"v1.0.6", GitCommit:"388061f00f0d9e4d641f9ed4971c775e1654579d", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"0", GitVersion:"v1.0.6", GitCommit:"388061f00f0d9e4d641f9ed4971c775e1654579d", GitTreeState:"clean"}
$ ./kube-proxy --version
Kubernetes v1.0.6

Am I missing some parameters in kube-proxy?

kube-proxy \
  --master=https://10.128.112.11:443 \
  --kubeconfig=/srv/kube-proxy/kubeconfig \
  --logtostderr=true

See my explanation of the problem in #249.

I believe this has been fixed by kubernetes/kubernetes@2a075cd, which made it into v1.1.1. I think this issue can be closed.

I'll take your word for it!
Since K8s v1.1.1 upgraded to Docker 1.8.3 (#15719) and CoreOS stable is still on 1.7.1, I'm not ready to check this issue.

The problem still happens for me on K8s 1.1.7, so I suggest reopening this issue.

+1

This happened to another user with k8s 1.2:
https://stackoverflow.com/questions/37993263/grafana-not-showing-in-kubernetes-heapster

Can we reopen the issue?

/cc a couple of maintainers: @vishh @mwielgus @piosz

piosz commented

The solution is posted by the asker: http://stackoverflow.com/a/38039069

Flanneld is running on my master, but I'm getting 404s too, along with an empty page showing a button and {{alert.title}}. This is Kubernetes 1.2.4 with Ubuntu as the K8s provider.

1.2.3.4 - - [29/Jul/2016:12:22:11 +0200] "GET /monitoring HTTP/1.1" 200 1247 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:47.0) Gecko/20100101 Firefox/47.0"
1.2.3.4 - - [29/Jul/2016:12:22:11 +0200] "GET /api/v1/proxy/namespaces/kube-system/services/monitoring-grafana/public/css/grafana.dark.min.a95b3754.css HTTP/1.1" 404 19 "https://mytestingdomain.net/monitoring" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:47.0) Gecko/20100101 Firefox/47.0"
1.2.3.4 - - [29/Jul/2016:12:22:11 +0200] "GET /api/v1/proxy/namespaces/kube-system/services/monitoring-grafana/public/app/app.ca0ab6f9.js HTTP/1.1" 404 19 "https://mytestingdomain.net/monitoring" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:47.0) Gecko/20100101 Firefox/47.0"

Maybe I've made an error in my nginx proxy configuration, but the K8s Dashboard works through it, so I think it should be OK:

        location /monitoring {
                # First attempt to serve request as file, then
                # as directory, then fall back to displaying a 404.
                #try_files $uri $uri/ =404;
                # Uncomment to enable naxsi on this location
                # include /etc/nginx/naxsi.rules
                proxy_pass http://10.10.100.1:8080/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana/;
                include /etc/nginx/proxy_params;
        }
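
To narrow down whether nginx or the API server produces those 404s, one could request a failing asset through both (a sketch, reusing the addresses from the log and config above):

$ curl -s -o /dev/null -w 'via nginx: %{http_code}\n' \
    https://mytestingdomain.net/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana/public/css/grafana.dark.min.a95b3754.css
$ curl -s -o /dev/null -w 'direct:    %{http_code}\n' \
    http://10.10.100.1:8080/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana/public/css/grafana.dark.min.a95b3754.css
# if only the nginx request fails, these absolute asset paths are not matched
# by the /monitoring location and would need their own location block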

piosz commented

This seems like a problem with the proxy on the master. AFAIR I've seen similar problems when accessing web pages via the master proxy. Please create an issue on https://github.com/kubernetes/kubernetes.

rxwen commented

I'm using Kubernetes 1.4.4 with the Weave overlay network, and I've encountered this problem too. How can I make it work?
I tried setting GF_SERVER_ROOT_URL to "/", but the Grafana pages showed no metrics at all.

rxwen commented

Never mind, my mistake. I can see metrics.
But I guess modifying GF_SERVER_ROOT_URL isn't the desired way to fix the problem. Is there a better way when using the Weave network?

Faced the same issue.

As I am running K8s on AWS, I created an ELB as the load balancer to bypass this issue.

kubectl expose --namespace=kube-system deployment monitoring-grafana --port=80 --target-port=3000 --name=monitoring-grafana-newservice --type=LoadBalancer

The command above will create an ELB based on the monitoring-grafana deployment, and the ELB will listen on port 80.
To access it, hit your ELB URL directly, or map the ELB name to your public/private DNS name via Route 53.
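
To find the hostname Kubernetes recorded for the new ELB (a sketch; the output stays empty until AWS finishes provisioning):

$ kubectl --namespace=kube-system get svc monitoring-grafana-newservice \
    -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'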

For me, the link is: http://a0cd682e50aa611e79a0202ddbfd8337-xxxxxx.ap-southeast-2.elb.amazonaws.com/

I get this error with kubectl proxy, but kubectl port-forward --namespace=kube-system <grafana-pod> 3000:3000 & works.
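
For reference, a sketch of that workaround with the pod name looked up via the name=influxGrafana label from the manifests earlier in the thread:

$ POD=$(kubectl --namespace=kube-system get pods -l name=influxGrafana \
    -o jsonpath='{.items[0].metadata.name}')
$ kubectl --namespace=kube-system port-forward "$POD" 3000:3000 &
# Grafana is then served from http://localhost:3000/ at the root path, so its
# root-relative asset URLs resolve and the 404s disappear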