stefanprodan/k8s-prom-hpa

How to configure the Istio Prometheus in the custom metrics API.

arunkumarmurugesan opened this issue · 17 comments

Hi Team,

I have followed the README and I am able to scale the pods up/down using http_requests.

Basically, I want the HPA to scale up/down based on the Istio Prometheus http_requests value, so I tried to configure the Istio Prometheus in the custom metrics API server, but afterwards I was unable to get the http_requests metric values.

I only changed the Prometheus endpoint in custom-metrics-apiserver-deployment.yaml:

- name: custom-metrics-apiserver
  image: quay.io/coreos/k8s-prometheus-adapter-amd64:v0.2.0
  args:
  - /adapter
  - --secure-port=6443
  - --tls-cert-file=/var/run/serving-cert/serving.crt
  - --tls-private-key-file=/var/run/serving-cert/serving.key
  - --logtostderr=true
  - --prometheus-url=http://prometheus.aruntest.tk:9090/  # it's a public domain
  - --metrics-relist-interval=30s
  - --rate-int

Error:

root@ip-172-21-15-19:/k8s-prom-hpa/custom-metrics-api# kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/monitoring/pods/*/http_requests" | jq .
Error from server (NotFound): the server could not find the metric http_requests for pods
root@ip-172-21-15-19:/k8s-prom-hpa/custom-metrics-api#
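
As a sanity check (a suggestion, not part of the original post), it may help to first list every metric name the adapter currently exposes and grep for the one you expect:

# List all metric names the custom metrics API currently serves
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1" | jq -r '.resources[].name'

# Narrow it down to request-related metrics, if any
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1" | jq -r '.resources[].name' | grep -i request

If http_requests does not show up here, the adapter is not finding it in the Prometheus instance it was pointed at.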

@arunkumarmurugesan the metric name is istio_requests_total
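
One way to confirm that metric is present in the Prometheus instance the adapter points at (a sketch using the public endpoint from the deployment args above, assuming it is reachable from your shell) is to query the Prometheus HTTP API directly:

# Count the istio_requests_total series in the adapter's Prometheus
curl -s 'http://prometheus.aruntest.tk:9090/api/v1/query?query=istio_requests_total' | jq '.data.result | length'

If this returns 0, that Prometheus is not scraping the Istio telemetry, and the adapter will never see the metric.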

Hi @stefanprodan,

I tried to query the aforementioned metric name but am still getting the same error.

root@ip-172-21-15-19:# kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/istio_requests_total" | jq .
Error from server (NotFound): the server could not find the metric istio_requests_total for pods
root@ip-172-21-15-19:#

Can you please use code tags when you post kubectl results?

Hi @stefanprodan,
Sure.

I tried to query the aforementioned metric name but am still getting the same error.

kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/istio_requests_total" | jq .
Error from server (NotFound): the server could not find the metric istio_requests_total for pods

Is the default namespace labeled for Istio sidecar injection? Do you run any pods there?
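
For reference, a quick way to check and, if needed, set the label (a sketch assuming automatic sidecar injection is installed; existing pods must be recreated to pick up the sidecar):

# Show the istio-injection label for every namespace
kubectl get namespaces -L istio-injection

# Enable automatic sidecar injection for the default namespace
kubectl label namespace default istio-injection=enabled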

@stefanprodan
I don't think so.

NAMESPACE NAME READY STATUS RESTARTS AGE
default details-v1-7f4b9b7775-tjz4w 2/2 Running 0 1d
default podinfo-65fc48f955-rck65 1/1 Running 0 2d
default podinfo-65fc48f955-xxbhs 1/1 Running 0 2d
default productpage-v1-586c4486b7-66hhp 2/2 Running 0 1d
default ratings-v1-7bc49f5779-p9vsv 2/2 Running 0 1d
default reviews-v1-b44bd5769-xwhkv 2/2 Running 0 1d
default reviews-v2-6d87c8c5-4zfx2 2/2 Running 0 1d
default reviews-v3-79fb5c99d5-pnsw9 2/2 Running 0 1d
istio-system grafana-6f6dff9986-ssg5k 1/1 Running 0 1d
istio-system istio-citadel-7bdc7775c7-tfqdl 1/1 Running 0 1d
istio-system istio-cleanup-old-ca-2lmwx 0/1 Completed 0 1d
istio-system istio-egressgateway-795fc9b47-5ph9q 1/1 Running 0 1d
istio-system istio-ingressgateway-7d89dbf85f-b9zkg 1/1 Running 0 1d
istio-system istio-mixer-post-install-pbdw5 0/1 Completed 0 1d
istio-system istio-pilot-66f4dd866c-xgjfb 2/2 Running 0 1d
istio-system istio-policy-76c8896799-dbkj7 2/2 Running 0 1d
istio-system istio-sidecar-injector-645c89bc64-l6lqw 1/1 Running 0 1d
istio-system istio-statsd-prom-bridge-949999c4c-j2f58 1/1 Running 0 1d
istio-system istio-telemetry-6554768879-7gbrm 2/2 Running 0 1d
istio-system istio-tracing-754cdfd695-lmt9q 1/1 Running 0 1d
istio-system prometheus-86cb6dd77c-2ff7b 1/1 Running 0 1d
istio-system servicegraph-5849b7d696-lvt6l 1/1 Running 0 1d
kube-system calico-kube-controllers-69c6bdf999-ssbl5 1/1 Running 1 1d
kube-system calico-node-bvv6g 2/2 Running 0 1d
kube-system calico-node-trzcz 2/2 Running 0 2d
kube-system calico-node-vkzmj 2/2 Running 0 2d
kube-system dns-controller-646c6b4d46-cb9z2 1/1 Running 0 1d
kube-system etcd-server-events-ip-172-20-39-71.us-west-1.compute.internal 1/1 Running 0 1d
kube-system etcd-server-ip-172-20-39-71.us-west-1.compute.internal 1/1 Running 0 1d
kube-system kube-apiserver-ip-172-20-39-71.us-west-1.compute.internal 1/1 Running 1 1d
kube-system kube-controller-manager-ip-172-20-39-71.us-west-1.compute.internal 1/1 Running 0 1d
kube-system kube-dns-5fbcb4d67b-csrtc 3/3 Running 0 2d
kube-system kube-dns-5fbcb4d67b-znsdt 3/3 Running 0 2d
kube-system kube-dns-autoscaler-6874c546dd-8npdf 1/1 Running 0 2d
kube-system kube-proxy-ip-172-20-33-56.us-west-1.compute.internal 1/1 Running 0 2d
kube-system kube-proxy-ip-172-20-39-71.us-west-1.compute.internal 1/1 Running 0 1d
kube-system kube-proxy-ip-172-20-55-205.us-west-1.compute.internal 1/1 Running 0 2d
kube-system kube-scheduler-ip-172-20-39-71.us-west-1.compute.internal 1/1 Running 0 1d
kube-system metrics-server-6fbfb84cdd-zrnw5 1/1 Running 0 2d
monitoring custom-metrics-apiserver-66ccb8774-65q56 1/1 Running 0 1m
monitoring prometheus-7dff795b9f-48s8s 1/1 Running 0 2d

kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/istio-system/pods/*/istio_requests_total" | jq .
Error from server (NotFound): the server could not find the metric istio_requests_total for pods

@arunkumarmurugesan were you able to get istio_requests_total as a custom metric using the adapter? I am trying to do the same thing, but I am unable to get istio_requests_total as part of the custom metrics.

@monson yes, I am able to get it. Please try to use the latest Docker images for Prometheus, which helps pull the request_total metric for you.

I'm facing the same issue. Did you manage to make it work?

@arunkumarmurugesan - hi, I've installed the prometheus-adapter chart and then replaced the image with latest; before that it was version 0.4.1.
When I run kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1 I see some Istio metrics about the Envoys, such as instances.config.istio.io/envoy_tcp_mixer_filter_total_remote_report_calls
and so on, so the adapter is able to take metrics from Istio's Prometheus, but I don't see istio_requests_total and can't make it work. Can you tell me what your steps were, including what you installed?
Thanks!

kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/staging/pods/*/istio_requests_total"
Error from server (NotFound): the server could not find the metric istio_requests_total for pods
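
Since istio_requests_total only exists in the Prometheus that scrapes the Istio telemetry service, it is worth double-checking which Prometheus URL the adapter was configured with (a sketch; the deployment name and namespace below are placeholders and depend on how the chart was installed):

# Inspect the adapter's arguments, including --prometheus-url
kubectl -n monitoring get deploy prometheus-adapter -o jsonpath='{.spec.template.spec.containers[0].args}'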

istio_requests_total is not a per-pod metric. The Istio metrics are generated by the telemetry service (Istio Mixer) and are labeled with the workload name and namespace.
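
So rather than a per-pod query, such a metric would be queried against another resource, e.g. the namespace. A hedged sketch, assuming the adapter actually associates istio_requests_total with the namespaces resource (which depends on the adapter version and its configuration):

# Query a namespace-scoped custom metric instead of a per-pod one
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/metrics/istio_requests_total" | jq .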

Yeah, after some time I realised that. So below is a portion of the current metrics that the adapter takes from the Prometheus that scrapes the Istio Envoys (and the Istio components as well).
I guess I want the custom metrics to show, per pod, the total requests per second for my microservice. From my searches I understand that I need a metric whose prefix is namespaces for the HPA to work. So what should I go with? envoy_http_async_client_rq looks like the best fit, but it's not namespaced, so I'm really lost here.

{
"kind": "APIResourceList",
"apiVersion": "v1",
"groupVersion": "custom.metrics.k8s.io/v1beta1",
"resources": [
{
"name": "jobs.batch/envoy_cluster_upstream_flow_control_paused_reading",
"singularName": "",
"namespaced": true,
"kind": "MetricValueList",
"verbs": [
"get"
]
},
{
"name": "jobs.batch/envoy_cluster_upstream_cx_connect_timeout",
"singularName": "",
"namespaced": true,
"kind": "MetricValueList",
"verbs": [
"get"
]
},
{
"name": "namespaces/envoy_listener_manager_lds_update_failure",
"singularName": "",
"namespaced": false,
"kind": "MetricValueList",
"verbs": [
"get"
]
},
{
"name": "namespaces/envoy_listener_admin_downstream_cx",
"singularName": "",
"namespaced": false,
"kind": "MetricValueList",
"verbs": [
"get"
]
},
{
"name": "namespaces/envoy_cluster_upstream_cx_destroy_local_with_active_rq",
"singularName": "",
"namespaced": false,
"kind": "MetricValueList",
"verbs": [
"get"
]
},
{
"name": "namespaces/envoy_listener_manager_listener_create_success",
"singularName": "",
"namespaced": false,
"kind": "MetricValueList",
"verbs": [
"get"
]
},
{
"name": "jobs.batch/envoy_server_uptime",
"singularName": "",
"namespaced": true,
"kind": "MetricValueList",
"verbs": [
"get"
]
},
{
"name": "jobs.batch/envoy_server_watchdog_miss",
"singularName": "",
"namespaced": true,
"kind": "MetricValueList",
"verbs": [
"get"
]
},
{
"name": "instances.config.istio.io/envoy_cluster_http2_rx_reset",
"singularName": "",
"namespaced": true,
"kind": "MetricValueList",
"verbs": [
"get"
]
},
{
"name": "namespaces/envoy_cluster_manager_cds_update_success",
"singularName": "",
"namespaced": false,
"kind": "MetricValueList",
"verbs": [
"get"
]
},
{
"name": "namespaces/envoy_filesystem_reopen_failed",
"singularName": "",
"namespaced": false,
"kind": "MetricValueList",
"verbs": [
"get"
]
},
{
"name": "namespaces/envoy_listener_manager_listener_added",
"singularName": "",
"namespaced": false,
"kind": "MetricValueList",
"verbs": [
"get"
]
}, ..........
]
}

I am not even able to get values from these metrics. I'm trying everything, for example:
kubectl get --raw "/apis/custom.metrics.k8s.io/namespaces/auto-staging/envoy_http_mixer_filter_total_blocking_remote_quota_calls/pods/*/pod-name-xxxx

kubectl get --raw "/apis/custom.metrics.k8s.io/namespaces//envoy_http_mixer_filter_total_blocking_remote_quota_calls/pods//

kubectl get --raw "/apis/custom.metrics.k8s.io/namespaces/auto-staging/pods/*/envoy_http_mixer_filter_total_blocking_remote_quota_calls
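
For reference, the paths the custom metrics API serves include the group version and put the metric name last; the examples below only reuse names already shown in this thread, and will still return NotFound unless the adapter actually exposes those metrics for those resources:

# Per-pod metric, for all pods in the auto-staging namespace
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/auto-staging/pods/*/envoy_http_mixer_filter_total_blocking_remote_quota_calls" | jq .

# Namespace-scoped metric (one of the names listed under "namespaces/" above)
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/auto-staging/metrics/envoy_listener_manager_lds_update_failure" | jq .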

I've made a new repo for HPA with Istio metrics https://github.com/stefanprodan/istio-hpa
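
Once the adapter is configured to expose an Istio-derived metric per pod (for example along the lines of the linked repo's setup), the HPA side is the standard custom metrics setup. A rough sketch only, with placeholder names and thresholds:

apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: podinfo
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      # the metric name must match what the custom metrics API exposes
      metricName: istio_requests_total
      # target average requests per second per pod (placeholder value)
      targetAverageValue: 10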

@iftachsc
I managed to make it work using the Envoy envoy_http_rq_total metric at the pod level.
Can you share how you get this metric in a pod? I'm following https://istio.io/docs/ops/configuration/telemetry/envoy-stats/ to enable all the static metrics, but I didn't see envoy_http_rq_total in the pod sidecar with the command kubectl exec -it httpbin2-ffc85bc7-zd5jw -c istio-proxy -- curl -s localhost:15090/stats/prometheus.
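
In case it helps, the page linked above turns on extra Envoy stats through an annotation on the deployment's pod template; a hedged sketch only (the annotation value here is an example, and the exact prefixes you need may differ, so check the linked doc):

spec:
  template:
    metadata:
      annotations:
        # ask the sidecar to expose additional Envoy stat prefixes at :15090/stats/prometheus
        sidecar.istio.io/statsInclusionPrefixes: "cluster.outbound,cluster_manager,listener_manager,http_mixer_filter,tcp_mixer_filter,server"

After changing the annotation the pods must be recreated for the sidecar to pick it up.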

@stefanprodan What is the difference between istio_requests_total and envoy_http_rq_total, and which is the right one to use?