opsgenie/kubernetes-event-exporter

Not able to send notifications to the ms_teams receiver

nkol2307 opened this issue · 1 comment

This is my ConfigMap:

apiVersion: v1
kind: ConfigMap
metadata:
  name: event-exporter-config
  namespace: monitoring
data:
  config.yaml: |
    logLevel: debug
    logFormat: json
    route:
      # Main route
      routes:
        # This route allows dumping all events because it has no fields to match and no drop rules.
        - match:
            - receiver: dump
        # This starts another route, drops all the events in default namespaces and Warning events
        # for capturing critical events
        - drop:
            - namespace: "*default"
            - type: "Warning"
            - receiver: "ms_teams"
          # match:
          #   - receiver: "critical-events-queue"
        # This a final route for user messages
        - match:
            - kind: "Pod|Deployment|ReplicaSet|StatefulSet|DaemonSet|Service"
            - receiver: "ms_teams"
    receivers:
      - name: "ms_teams"
        teams:
          endpoint: ".........webhook link to teams channel"
          layout: # Optional
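One thing worth double-checking against the config above: in the second route, the `receiver: "ms_teams"` line sits inside the `drop:` rule list, while the `match:` block below it is commented out. In the upstream example config, receivers are assigned in `match` rules, not `drop` rules. A sketch of that route following the upstream example layout (an assumption about intent, not a verified fix for this cluster):

```yaml
# Sketch based on the upstream kubernetes-event-exporter example config:
# the receiver is assigned in a match rule; drop rules only filter events.
route:
  routes:
    # Drop Warning events and events from *default namespaces, then
    # send everything that survives to the ms_teams receiver.
    - drop:
        - namespace: "*default"
        - type: "Warning"
      match:
        - receiver: "ms_teams"
```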

And these are the logs from the pod:
2021-08-13T12:06:03Z DBG bitnami/blacksmith-sandox/kubernetes-event-exporter-0.10.0/src/github.com/opsgenie/kubernetes-event-exporter/pkg/exporter/channel_registry.go:56 > sending event to sink event="pod didn't trigger scale-up: 6 node(s) had volume node affinity conflict, 1 max node group size reached" sink=dump {}
2021-08-13T12:06:03Z DBG bitnami/blacksmith-sandox/kubernetes-event-exporter-0.10.0/src/github.com/opsgenie/kubernetes-event-exporter/pkg/kube/watcher.go:64 > Received event involvedObject=wdwneo4j-neo4j-core-2 msg="pod didn't trigger scale-up: 6 node(s) had volume node affinity conflict, 1 max node group size reached" namespace=default reason=NotTriggerScaleUp
2021-08-13T12:06:03Z DBG bitnami/blacksmith-sandox/kubernetes-event-exporter-0.10.0/src/github.com/opsgenie/kubernetes-event-exporter/pkg/exporter/channel_registry.go:56 > sending event to sink event="pod didn't trigger scale-up: 6 node(s) had volume node affinity conflict, 1 max node group size reached" sink=dump {}
2021-08-13T12:08:54Z DBG bitnami/blacksmith-sandox/kubernetes-event-exporter-0.10.0/src/github.com/opsgenie/kubernetes-event-exporter/pkg/kube/watcher.go:64 > Received event involvedObject=map-db-0 msg="successfully rotated K8s secret map-db-secrets-kv" namespace=default reason=SecretRotationComplete
2021-08-13T12:08:54Z DBG bitnami/blacksmith-sandox/kubernetes-event-exporter-0.10.0/src/github.com/opsgenie/kubernetes-event-exporter/pkg/exporter/channel_registry.go:56 > sending event to sink event="successfully rotated K8s secret map-db-secrets-kv" sink=dump {}
2021-08-13T12:11:05Z DBG bitnami/blacksmith-sandox/kubernetes-event-exporter-0.10.0/src/github.com/opsgenie/kubernetes-event-exporter/pkg/kube/watcher.go:64 > Received event involvedObject=wdwneo4j-neo4j-core-1 msg="pod didn't trigger scale-up: 6 node(s) had volume node affinity conflict, 1 max node group size reached" namespace=default reason=NotTriggerScaleUp
2021-08-13T12:11:05Z DBG bitnami/blacksmith-sandox/kubernetes-event-exporter-0.10.0/src/github.com/opsgenie/kubernetes-event-exporter/pkg/exporter/channel_registry.go:56 > sending event to sink event="pod didn't trigger scale-up: 6 node(s) had volume node affinity conflict, 1 max node group size reached" sink=dump {}
2021-08-13T12:11:05Z DBG bitnami/blacksmith-sandox/kubernetes-event-exporter-0.10.0/src/github.com/opsgenie/kubernetes-event-exporter/pkg/kube/watcher.go:64 > Received event involvedObject=wdwneo4j-neo4j-core-2 msg="pod didn't trigger scale-up: 6 node(s) had volume node affinity conflict, 1 max node group size reached" namespace=default reason=NotTriggerScaleUp
2021-08-13T12:11:05Z DBG bitnami/blacksmith-sandox/kubernetes-event-exporter-0.10.0/src/github.com/opsgenie/kubernetes-event-exporter/pkg/exporter/channel_registry.go:56 > sending event to sink event="pod didn't trigger scale-up: 6 node(s) had volume node affinity conflict, 1 max node group size reached" sink=dump {}
2021-08-13T12:11:05Z DBG bitnami/blacksmith-sandox/kubernetes-event-exporter-0.10.0/src/github.com/opsgenie/kubernetes-event-exporter/pkg/kube/watcher.go:64 > Received event involvedObject=wdwneo4j-neo4j-core-0 msg="pod didn't trigger scale-up: 6 node(s) had volume node affinity conflict, 1 max node group size reached" namespace=default reason=NotTriggerScaleUp
2021-08-13T12:11:05Z DBG bitnami/blacksmith-sandox/kubernetes-event-exporter-0.10.0/src/github.com/opsgenie/kubernetes-event-exporter/pkg/exporter/channel_registry.go:56 > sending event to sink event="pod didn't trigger scale-up: 6 node(s) had volume node affinity conflict, 1 max node group size reached" sink=dump {}
2021-08-13T12:14:54Z DBG bitnami/blacksmith-sandox/kubernetes-event-exporter-0.10.0/src/github.com/opsgenie/kubernetes-event-exporter/pkg/kube/watcher.go:64 > Received event involvedObject=map-db-0 msg="successfully rotated K8s secret map-db-secrets-kv" namespace=default reason=SecretRotationComplete
2021-08-13T12:14:54Z DBG bitnami/blacksmith-sandox/kubernetes-event-exporter-0.10.0/src/github.com/opsgenie/kubernetes-event-exporter/pkg/exporter/channel_registry.go:56 > sending event to sink event="successfully rotated K8s secret map-db-secrets-kv" sink=dump {}
2021-08-13T12:16:07Z DBG bitnami/blacksmith-sandox/kubernetes-event-exporter-0.10.0/src/github.com/opsgenie/kubernetes-event-exporter/pkg/kube/watcher.go:64 > Received event involvedObject=wdwneo4j-neo4j-core-0 msg="pod didn't trigger scale-up: 6 node(s) had volume node affinity conflict, 1 max node group size reached" namespace=default reason=NotTriggerScaleUp
2021-08-13T12:16:07Z DBG bitnami/blacksmith-sandox/kubernetes-event-exporter-0.10.0/src/github.com/opsgenie/kubernetes-event-exporter/pkg/exporter/channel_registry.go:56 > sending event to sink event="pod didn't trigger scale-up: 6 node(s) had volume node affinity conflict, 1 max node group size reached" sink=dump {}
2021-08-13T12:16:07Z DBG bitnami/blacksmith-sandox/kubernetes-event-exporter-0.10.0/src/github.com/opsgenie/kubernetes-event-exporter/pkg/kube/watcher.go:64 > Received event involvedObject=wdwneo4j-neo4j-core-1 msg="pod didn't trigger scale-up: 6 node(s) had volume node affinity conflict, 1 max node group size reached" namespace=default reason=NotTriggerScaleUp
2021-08-13T12:16:07Z DBG bitnami/blacksmith-sandox/kubernetes-event-exporter-0.10.0/src/github.com/opsgenie/kubernetes-event-exporter/pkg/exporter/channel_registry.go:56 > sending event to sink event="pod didn't trigger scale-up: 6 node(s) had volume node affinity conflict, 1 max node group size reached" sink=du
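Note that every `sending event to sink` line above shows `sink=dump`, i.e. events are reaching the catch-all route but none are being handed to the `ms_teams` sink. When debugging this kind of problem, one way to rule out the webhook endpoint itself is to post a test message to it directly, outside the exporter. A minimal sketch, assuming a standard Teams incoming webhook (which accepts a JSON body with a `text` field); the `TEAMS_WEBHOOK_URL` environment variable is a placeholder you would set to the real webhook:

```python
import json
import os
import urllib.request

# Minimal test payload accepted by Teams incoming webhooks.
payload = json.dumps({"text": "kubernetes-event-exporter connectivity test"})

# Hypothetical env var holding the real webhook URL; nothing is sent if unset.
webhook_url = os.environ.get("TEAMS_WEBHOOK_URL")
if webhook_url:
    req = urllib.request.Request(
        webhook_url,
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # A 200 response means the webhook itself works, so the problem
        # is in the exporter's routing or receiver config instead.
        print(resp.status)
```

If the direct POST succeeds but the exporter still only logs `sink=dump`, the problem is in the route/receiver configuration rather than the Teams side.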

On behalf of @nkol2307: the issue was solved. It was a problem on our side. ;)