criblio/appscope

Update Prometheus Metrics Support for Kubernetes

ricksalsa opened this issue · 4 comments

Update our existing code to use the statsd exporter to convert our existing StatsD metrics into the correct Prometheus exposition format.

  • Update scope k8s CLI output to add the statsd exporter container image
    • Add Prometheus
  • Update the AppScope Helm chart to ship the statsd exporter
  • Remove promserver.go
  • Remove prometheus from [--metricformat] (CLI)
  • In libscope, ifdef out the support for Prometheus-format metrics
  • Modify the crash dump in the scope-ebpf repo to emit StatsD instead of Prometheus

cmd.Flags().StringVarP(&rc.MetricsDest, "metricdest", "m", "", "Set destination for metrics (host:port defaults to tls://)")
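As a sketch of the first checklist item, the statsd exporter could ship as a sidecar container in the generated manifest. Everything below (image tag, container name, port numbers) is an assumption for illustration, not the final manifest:

```yaml
# Hypothetical sidecar entry for the scope k8s output / Helm chart.
- name: scope-stats-exporter
  image: prom/statsd-exporter:latest   # upstream statsd exporter image (tag assumed)
  imagePullPolicy: Always
  args:
    - --statsd.listen-tcp=:9109        # where AppScope sends StatsD metrics (port used in this issue)
    - --web.listen-address=:9102       # Prometheus scrape endpoint (statsd_exporter default)
  ports:
    - containerPort: 9109
      name: statsd
    - containerPort: 9102
      name: metrics
```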

Next steps:

  • Handle scope-ebpf
  • Remove the obsolete promport parameter
  • Work on Helm chart support

Open questions:

  • What image pull policy should be used for the statsd exporter? I suggest "Always"
  • What name should we use: rename Prometheus Exporter -> Statsd Exporter, or keep prom-exporter? I suggest "prom-exporter"
  • Should we support deployment of the statsd exporter with the scope k8s command, or should it be done explicitly only in the Helm chart? I suggest doing it in both places
  • Should the statsd exporter be in the same pod as the AppScope webhook server? I suggest yes
  • Should the appscope prefix for metric data coming from AppScope be added via the statsd-exporter mapping-config file, or via support for a StatsD prefix in the scope CLI?
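For context on the last question, the mapping-config route would look roughly like this (the regex and resulting name are assumptions):

```yaml
# Hypothetical statsd_exporter mapping-config that prepends "appscope_" to every metric.
mappings:
  - match: "(.+)"
    match_type: regex
    name: "appscope_${1}"
```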

cc @ricksalsa

What image pull policy should be used for the statsd exporter? I suggest "Always"

Agreed

What name should we use: rename Prometheus Exporter -> Statsd Exporter, or keep prom-exporter? I suggest "prom-exporter"

I'd recommend we be more explicit with our naming conventions for the containers. I'd like to see us use appscope-stats-exporter.

Should we support deployment of the statsd exporter with the scope k8s command, or should it be done explicitly only in the Helm chart? I suggest doing it in both places

Yes, we should have the configuration within the output of the scope k8s subcommand, and in our Helm chart.
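A rough sketch of what the Helm chart side might expose in values.yaml (all keys here are hypothetical, not the shipped chart):

```yaml
# Hypothetical values.yaml fragment for the AppScope Helm chart.
statsdExporter:
  enabled: true
  image:
    repository: prom/statsd-exporter
    pullPolicy: Always     # per the image pull policy decision above
  ports:
    statsd: 9109           # StatsD ingest (matches --metricdest in this issue)
    metrics: 9102          # Prometheus scrape port
```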

Should the statsd exporter be in the same pod as the AppScope webhook server? I suggest yes

Agreed

Should the appscope prefix for metric data coming from AppScope be added via the statsd-exporter mapping-config file, or via support for a StatsD prefix in the scope CLI?

Having the library add the prefix would simplify the config for the stats exporter, and probably even save a few operations. Let's have the library add the prefix, as we've done with the current code in the 1.4 branch.
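To make the trade-off concrete: the prefix is just a string prepended to each StatsD metric name before it reaches the exporter. A quick illustration (the metric name is made up):

```shell
# What the exporter receives without a prefix:
echo 'proc.cpu_perc:12.5|g'
# What it receives when the library adds the prefix (e.g. via --metricprefix appscope):
echo 'appscope.proc.cpu_perc:12.5|g'
```

statsd_exporter's default mapping would then expose the second line as `appscope_proc_cpu_perc`, with no mapping-config needed.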

@ricksalsa
Updated according to the above, with one small change:
appscope-stats-exporter -> scope-stats-exporter as the default, since scope k8s has an additional argument called
"app", which defines the "Name of the app in Kubernetes" and defaults to "scope"

QA instructions (example deployment):

# check out the changes in #1522
# make build CMD="make all"
# make image

# create cluster
kind create cluster

# deploy to k8s; StatsD-format metrics with the appscope prefix will go to the statsd exporter (container placed in the scope pod)
./bin/linux/x86_64/scope k8s --metricdest tcp://scope-stats-exporter:9109 --metricformat statsd --metricprefix appscope --eventdest tcp://other.host:10070 | kubectl apply -f -

kind delete cluster