kubectl create deployment zipkin --image openzipkin/zipkin
kubectl expose deployment zipkin --type ClusterIP --port 9411
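To sanity-check Zipkin before wiring up tracing, you can port-forward and open the UI at http://localhost:9411 (a quick verification step, not part of the original setup):

kubectl port-forward svc/zipkin 9411:9411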
Apply this configuration:

kubectl apply -f deploy/dapr/tracing.yaml

TODO: Check why this lands in the default namespace - seems less than ideal given that the rest of the observability stack lives in dapr-monitoring.
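For reference, deploy/dapr/tracing.yaml is presumably a Dapr Configuration along these lines (a sketch - the resource name and sampling rate are assumptions; the endpoint matches the Zipkin service created above in the default namespace):

apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: tracing
spec:
  tracing:
    samplingRate: "1"   # trace every request; lower this in production
    zipkin:
      endpointAddress: "http://zipkin.default.svc.cluster.local:9411/api/v2/spans"

Apps opt in by setting the dapr.io/config annotation on their pods to the Configuration's name.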
Following the instructions in the Dapr Prometheus docs worked out of the box, which was unexpected - typically more configuration is needed.
TODO: Investigate how the scraping is happening - or is Dapr using Pushgateway?
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install dapr-prom prometheus-community/prometheus -n dapr-monitoring \
  --set alertmanager.persistence.enabled=false \
  --set pushgateway.persistentVolume.enabled=false \
  --set server.persistentVolume.enabled=false
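On the scraping TODO above: most likely no Pushgateway is involved. Dapr's control-plane pods (and sidecars with metrics enabled) expose metrics on port 9090 and carry the standard prometheus.io/scrape and prometheus.io/port annotations, which the Prometheus chart's default Kubernetes pod-discovery scrape job picks up automatically. A quick spot check (not from the original notes):

kubectl get pods -n dapr-system -o jsonpath='{.items[0].metadata.annotations}'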
TODO: Debug why persistence isn't working properly with Elasticsearch.
TODO: Debug why Elasticsearch sometimes does not launch.
To clear out the existing logging stack (currently, relaunching the stack in place doesn't work):

kubectl delete ns dapr-monitoring
kubectl delete -f deploy/dapr/fluentd-*.yaml

Note the glob is left unquoted so the shell expands it; kubectl itself does not expand glob patterns.
kubectl create ns dapr-monitoring
helm repo add elastic https://helm.elastic.co
helm install elasticsearch elastic/elasticsearch -n dapr-monitoring --set replicas=1
helm install kibana elastic/kibana -n dapr-monitoring
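If Elasticsearch fails to come up (per the TODOs above), standard triage applies - check the pod status and its events (the pod name assumes the chart's default elasticsearch-master naming with one replica):

kubectl get pods -n dapr-monitoring
kubectl describe pod elasticsearch-master-0 -n dapr-monitoring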
Currently, the manifests need to be patched to include the correct password: in ./deploy/dapr/fluentd-dapr-with-rbac.yaml, update FLUENT_ELASTICSEARCH_PASSWORD to the output of:
kubectl get secrets --namespace=dapr-monitoring elasticsearch-master-credentials -ojsonpath='{.data.password}' | base64 -d
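Alternatively, rather than hardcoding the password, the DaemonSet's env entry could reference the secret directly, so the manifest survives recreating the stack. A sketch, assuming the manifest uses a standard container env list and that fluentd runs in the same namespace as the secret (secrets cannot be referenced across namespaces):

- name: FLUENT_ELASTICSEARCH_PASSWORD
  valueFrom:
    secretKeyRef:
      name: elasticsearch-master-credentials  # created by the elasticsearch chart
      key: password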
Then apply the manifests:

kubectl apply -f deploy/dapr/fluentd-*.yaml
With your Dapr app running, open a port-forward and configure Kibana:
kubectl port-forward svc/kibana-kibana 5601 -n dapr-monitoring
- User: elastic
- Password: the value retrieved from the elasticsearch-master-credentials secret above
- Create a data view:
  a. name - dapr
  b. index pattern - dapr*
  c. timestamp - @timestamp
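The data view can also be created non-interactively through Kibana's data views API (assumes Kibana 8.x; $ELASTIC_PASSWORD is a placeholder for the secret value retrieved earlier):

curl -u elastic:$ELASTIC_PASSWORD -X POST 'http://localhost:5601/api/data_views/data_view' \
  -H 'kbn-xsrf: true' -H 'Content-Type: application/json' \
  -d '{"data_view":{"name":"dapr","title":"dapr*","timeFieldName":"@timestamp"}}'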