Falcosidekick multiple notifications to Slack
avramenkovladyslav opened this issue · 3 comments
Describe the bug
Falcosidekick sends multiple notifications to Slack. After configuring alerts and the Slack webhook, I receive as many identical messages as there are nodes in the cluster.
How to reproduce it
Helm values:
```yaml
falco:
  grpc:
    enabled: true
  grpc_output:
    enabled: true
  http_output:
    enabled: true
    url: "http://falco-falcosidekick.falco.svc.cluster.local:2801"
falcosidekick:
  enabled: true
  config:
    debug: true
    slack:
      outputformat: fields
      messageformat: '*{{ .Rule }}* on *{{ index .OutputFields "k8s.pod.name" }}*'
      minimumpriority: "emergency"
      webhookurl: $URL
customRules:
  rules-shell.yaml: |-
    - rule: Terminal shell
      desc: Detects when a pod tries to request external shell
      condition: >
        k8s.ns.name = "default" and
        container.id != host and
        evt.type = execve and
        (proc.pname = bash or
        proc.pname = sh) and
        proc.cmdline != bash
      output: Terminal shell in
        (pod_name=%k8s.pod.name
        container_name=%container.name)
      priority: EMERGENCY
      tags: [custom_rules]
```
Trigger a terminal shell in a container (e.g. `kubectl exec` into a pod in the default namespace).
Expected behaviour
Only one message should appear
Environment
- Falco version: Helm 3.8.5, falco 0.36.2 (latest)
- System info:
"machine": "x86_64",
"nodename": "falco-8f48l",
"release": "5.10.198-187.748.amzn2.x86_64",
"sysname": "Linux",
"version": "#1 SMP Tue Oct 24 19:49:54 UTC 2023"
- Cloud provider or hardware configuration: AWS EKS
- OS: Debian GNU/Linux 12
- Kernel:
Linux falco-8f48l 5.10.198-187.748.amzn2.x86_64 #1 SMP Tue Oct 24 19:49:54 UTC 2023 x86_64 GNU/Linux
- Installation method: Kubernetes Helm
Additional context
Hi,
This is not an issue at the Falcosidekick level. Basically, the same rule is triggered several times by Falco, and Falcosidekick forwards them all; there is no de-duplication mechanism. If you check the Falco logs you'll see all the occurrences.
To be more precise, when you exec into a pod to create a shell, for a reason I don't know, several threads are created and each of them triggers the rule. The rule's output doesn't reflect that for now, but if you edit it to add thread.tid you will see that they are different. You can also notice it from the timestamps: even though they're close, they're different, which means those are different events.
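For illustration, the custom rule's output from this issue could be edited as follows (a sketch only; adding `%thread.tid` is the only change) to confirm that each notification comes from a distinct thread:

```yaml
      # Same custom rule output as above, with thread.tid added so each
      # alert shows which thread triggered it.
      output: Terminal shell in
        (pod_name=%k8s.pod.name
        container_name=%container.name
        tid=%thread.tid)
```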
@Issif I see. Do you know of any workaround to reduce the amount of redundant notifications, such as filtering by comparing fields against recent alerts?
No solution for now for your use case; that kind of de-duplication is more the job of a SIEM or a similar tool.
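For anyone who needs a stopgap before a SIEM is in place, one option along the lines of the filtering mentioned above is a small relay in front of the Slack webhook. The sketch below is only an illustration, not a Falcosidekick feature: it assumes Falco's standard JSON alert fields (`rule`, `output_fields`) and a placeholder Slack webhook URL, and it drops repeats of the same (rule, pod) pair seen within a short window.

```python
# dedup_relay.py - illustrative only: receive Falco JSON alerts over HTTP,
# drop repeats of the same (rule, pod) seen within a short window, and
# forward the rest to a Slack incoming webhook.
import json
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder
DEDUP_WINDOW_SECONDS = 10
last_seen = {}  # (rule, pod) -> time the last alert was forwarded


class DedupHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        alert = json.loads(self.rfile.read(length))
        # Field names follow Falco's standard JSON output.
        rule = alert.get("rule", "")
        pod = alert.get("output_fields", {}).get("k8s.pod.name", "")
        key = (rule, pod)
        now = time.time()
        # Forward only if the same (rule, pod) was not forwarded recently.
        if now - last_seen.get(key, 0) > DEDUP_WINDOW_SECONDS:
            last_seen[key] = now
            payload = json.dumps({"text": f"*{rule}* on *{pod}*"}).encode()
            req = urllib.request.Request(
                SLACK_WEBHOOK_URL,
                data=payload,
                headers={"Content-Type": "application/json"},
            )
            urllib.request.urlopen(req)
        self.send_response(200)
        self.end_headers()


if __name__ == "__main__":
    HTTPServer(("", 2802), DedupHandler).serve_forever()
```

Falco's http_output (or Falcosidekick's webhook output) would then be pointed at the relay's address instead of posting to Slack directly; the window length and the de-duplication key here are arbitrary choices for the example.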