canonical/kubeflow-ci

add `kubectl describe nodes` to logdump


The kubectl crashdump included in the logdump action does not capture the resource-utilization table that `kubectl describe nodes` produces:

```
Non-terminated Pods:          (35 in total)
  Namespace                   Name                                             CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------                   ----                                             ------------  ----------  ---------------  -------------  ---
  metallb-system              speaker-vgxb7                                    100m (1%)     100m (1%)   100Mi (0%)       100Mi (0%)     18h
  metallb-system              controller-5d468955f-7bmcm                       100m (1%)     100m (1%)   100Mi (0%)       100Mi (0%)     18h
  kube-system                 calico-node-6xc24                                250m (3%)     0 (0%)      0 (0%)           0 (0%)         18h
  kube-system                 calico-kube-controllers-6646556cff-2gf7z         0 (0%)        0 (0%)      0 (0%)           0 (0%)         18h
  kube-system                 coredns-66bcf65bb8-8k9d9                         100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     18h
  kube-system                 hostpath-provisioner-78cb89d65b-8dgzl            0 (0%)        0 (0%)      0 (0%)           0 (0%)         18h
  controller-local-microk8s   modeloperator-8499945f6-nzn8t                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         18h
  kubeflow                    modeloperator-6d56c8d64-t6fqp                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         18h
  controller-local-microk8s   controller-0                                     0 (0%)        0 (0%)      3Gi (4%)         3Gi (4%)       18h
  kubeflow                    istiod-58687f9d68-4lkzk                          500m (6%)     0 (0%)      2Gi (3%)         0 (0%)         24m
  kubeflow                    istio-ingressgateway-0                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
  kubeflow                    istio-ingressgateway-workload-5dcdfb989-g6xrz    10m (0%)      2 (25%)     40Mi (0%)        1Gi (1%)       24m
  kubeflow                    istio-pilot-0                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
  kubeflow                    knative-operator-0                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
  knative-serving             autoscaler-bc7d6c9c9-fl6q2                       100m (1%)     1 (12%)     100Mi (0%)       1000Mi (1%)    22m
  knative-serving             activator-5f6b4bf5c8-ghmtt                       300m (3%)     1 (12%)     60Mi (0%)        600Mi (0%)     22m
  knative-serving             controller-687d88ff56-6ftqj                      100m (1%)     1 (12%)     100Mi (0%)       1000Mi (1%)    22m
  kubeflow                    knative-serving-0                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
  knative-serving             domain-mapping-69cc86d8d5-dskcc                  30m (0%)      300m (3%)   40Mi (0%)        400Mi (0%)     21m
  knative-serving             domainmapping-webhook-65dfdd9b96-jhghd           100m (1%)     500m (6%)   100Mi (0%)       500Mi (0%)     21m
  knative-serving             webhook-587cdd8dd7-5h4d5                         100m (1%)     500m (6%)   100Mi (0%)       500Mi (0%)     21m
  kubeflow                    knative-eventing-0                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
  knative-serving             autoscaler-hpa-6469fbb6cd-xxqcl                  30m (0%)      300m (3%)   40Mi (0%)        400Mi (0%)     21m
  knative-serving             net-istio-controller-5fc4cc65f7-ttsj7            30m (0%)      300m (3%)   40Mi (0%)        400Mi (0%)     21m
  knative-serving             net-istio-webhook-6c5b7cbdd5-7h5mn               20m (0%)      200m (2%)   20Mi (0%)        200Mi (0%)     21m
  knative-eventing            eventing-webhook-7d5b577c94-9z9bg                100m (1%)     200m (2%)   50Mi (0%)        200Mi (0%)     21m
  knative-eventing            imc-controller-769d8b7f66-zxlx2                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
  knative-eventing            imc-dispatcher-55979cf74b-chb24                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
  knative-eventing            mt-broker-filter-56b5d6d697-89cqt                100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         21m
  knative-eventing            mt-broker-ingress-5c4d45dfd6-zklhx               100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         21m
  knative-eventing            eventing-controller-7f448655c8-8zdml             100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         21m
  knative-eventing            mt-broker-controller-66b756f8bb-rdzsk            100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         21m
  kubeflow                    otel-collector-5b65fb49bc-75x4z                  50m (0%)      0 (0%)      100Mi (0%)       0 (0%)         16m
  kubeflow                    prometheus-scrape-config-k8s-0                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
  kubeflow                    prometheus-k8s-0                                 250m (3%)     250m (3%)   200Mi (0%)       0 (0%)         11s
```

That table would be useful to include in the log dump.
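A minimal sketch of what the addition could look like, assuming the logdump action collects files into a directory; the `LOGDIR` variable and the `describe-nodes.txt` file name are illustrative, not the action's actual layout:

```shell
#!/bin/sh
# Hypothetical addition to the logdump step: capture the full node
# descriptions (which include the Non-terminated Pods table with per-pod
# CPU/memory requests and limits) alongside the existing crashdump output.
LOGDIR="${LOGDIR:-./logdump}"
mkdir -p "$LOGDIR"

# Redirect stderr too, and tolerate failure so a broken cluster connection
# does not abort the rest of the log collection.
kubectl describe nodes > "$LOGDIR/describe-nodes.txt" 2>&1 || true
```

The `|| true` keeps the dump step best-effort, matching the usual pattern for diagnostics collection in CI.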