vmware-archive/kubewatch

Custom format for webhook handler

bugok opened this issue · 7 comments

bugok commented

This is a feature request, or a request for feedback on how to implement it.

I would like to use kubewatch to send Kubernetes events to an HTTP endpoint, but with a specific structure.
Imagine the following:

{
    "time": 1613222106, // this is unixtime
    "numbers": { // All numeric values should go here (except time). I haven't thought of any yet.
    },
    "strings": { // All string values should go here
        "text": "Event text",
        "name": "event name",
        "namespace": "my_namespace",
        "reason": "reason",
        "my_custom_constant_key": "my_custom_constant_value",
        "my_other_custom_constant_key": "my_other_custom_constant_value"
    }
}

I thought about several ways of doing this, and I'd like to get some agreement before upstreaming the changes, if this makes sense to the maintainers.

Assume I add a new option / environment variable that allows specifying a custom JSON format, something like this (implementing the format I need from the example above):

{
    "time": $EVENT_TIME$,
    "numbers": {
    },
    "strings": {
        "text": $EVENT_TEXT$,
        "name": $EVENT_NAME$,
        "namespace": $EVENT_NAMESPACE$,
        "reason": $EVENT_REASON$,
        "my_custom_constant_key": "my_custom_constant_value",
        "my_other_custom_constant_key": "my_other_custom_constant_value"
    }
}

The $EVENT_*$ markers tell the handler to replace each marker with the corresponding event value. This approach gives the flexibility to define any JSON format that is needed. However, as a golang noob, I'm not sure how clean dynamic JSON parsing would be in Go.
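
For example, here's a rough sketch of how the substitution could be done with plain string replacement (the template and helper names are made up, and the event field names are just illustrative, not kubewatch's actual struct):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// customFormat is the user-supplied template, e.g. loaded from an
// environment variable. The $EVENT_*$ markers are the placeholders.
const customFormat = `{
    "time": $EVENT_TIME$,
    "strings": {
        "text": "$EVENT_TEXT$",
        "name": "$EVENT_NAME$",
        "namespace": "$EVENT_NAMESPACE$",
        "reason": "$EVENT_REASON$",
        "my_custom_constant_key": "my_custom_constant_value"
    }
}`

// renderPayload fills the markers with the actual event values.
func renderPayload(text, name, namespace, reason string) string {
	r := strings.NewReplacer(
		"$EVENT_TIME$", strconv.FormatInt(time.Now().Unix(), 10),
		"$EVENT_TEXT$", text,
		"$EVENT_NAME$", name,
		"$EVENT_NAMESPACE$", namespace,
		"$EVENT_REASON$", reason,
	)
	return r.Replace(customFormat)
}

func main() {
	fmt.Println(renderPayload("Back-off restarting failed container", "my-pod", "my_namespace", "BackOff"))
}

This avoids dynamic JSON parsing entirely, but it also doesn't escape quotes or newlines inside the values; building a map[string]interface{} and running json.Marshal would be safer if the event text can contain arbitrary characters.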

Another option that would work for me is to fork this repo and maintain my own copies of the source code and the Docker image (I could probably get away without forking the Helm chart). That's possible, but if this is something that makes sense for the maintainers to have as a feature, I'd be happy to do the work to add it, even if a different way of implementing it makes more sense.

Thanks.

aantn commented

I'm not the maintainer, but I'm curious about your use case. I've forked kubewatch myself to add support for extra JSON in the webhook, albeit in a slightly different way than you.

bugok commented

@aantn : Shalom :)
My use case is that I have an HTTP endpoint that expects logs to be posted via an HTTP POST, and it expects the request body to be in that specific format.

Eventually, I also forked this repo internally and implemented a new handler (based on the webhook handler) which executes the HTTP POST in the format I expect.

I've also made a few other changes:

  • I output everything the event object has.
  • I added a 'cluster' attribute that I pass to the config. That doesn't relate to the events, but having a 'cluster' in the logging sample allows me to dump the logs from all my clusters to the same dataset, where I can filter by cluster.

aantn commented

@bugok Shalom shalom :)

By "everything the event has", I assume you're referring to the actual Kubernetes object that changed? I'm doing the same in my fork here: https://github.com/aantn/kubewatch

Out of curiosity, what is your use case? An internal tool, I assume?

bugok commented

@aantn : I mean all attributes the event object has: https://github.com/bitnami-labs/kubewatch/blob/84a34db93ff9935ce133f4eb1175187154253685/pkg/event/event.go#L30-L38
As well as the message and the time of the event.

Currently, I just collect the data, but in the future I plan to:

  • Show recent (bad) events on a dashboard
  • Create alerts based on bad events which were logged.

Yes, all these are maintained by internal tools.

aantn commented

@bugok cool. I dump the entire k8s object to JSON on creates and deletes. For updates, I dump both the old and the new object so that you can diff them and see which fields changed. If that's useful, feel free to use it and send me any questions.
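
Roughly, the idea for updates is a payload like this (a simplified sketch, not the actual code from my fork; the struct and field names are just illustrative):

package main

import (
	"encoding/json"
	"fmt"
)

// updatePayload carries both versions of the object so the receiver
// can diff them and see which fields changed.
type updatePayload struct {
	Operation string      `json:"operation"`
	OldObject interface{} `json:"old_object"`
	NewObject interface{} `json:"new_object"`
}

func main() {
	oldObj := map[string]interface{}{
		"kind":     "Pod",
		"metadata": map[string]interface{}{"name": "my-pod", "labels": map[string]string{"version": "v1"}},
	}
	newObj := map[string]interface{}{
		"kind":     "Pod",
		"metadata": map[string]interface{}{"name": "my-pod", "labels": map[string]string{"version": "v2"}},
	}

	body, _ := json.MarshalIndent(updatePayload{Operation: "update", OldObject: oldObj, NewObject: newObj}, "", "  ")
	fmt.Println(string(body))
}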

How are you handling the logic for determining which events are bad, and when/how alerts should be created?

bugok commented

@aantn : I haven't defined the alerts yet. However, I'm planning to start by filtering events with status=Danger. I see that I get messages with CrashLoopBackOff, for example.
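
Something like this is the kind of filter I have in mind (a rough sketch; whether kubewatch uses exactly this status field and value is an assumption on my part):

package main

import (
	"fmt"
	"strings"
)

// shouldAlert decides whether an event should be turned into an alert.
// The "danger" status value and the CrashLoopBackOff check follow the
// discussion above, not a verified kubewatch API.
func shouldAlert(status, message string) bool {
	return strings.EqualFold(status, "danger") ||
		strings.Contains(message, "CrashLoopBackOff")
}

func main() {
	fmt.Println(shouldAlert("Danger", ""))                                   // true
	fmt.Println(shouldAlert("Normal", "Back-off ... CrashLoopBackOff"))      // true
	fmt.Println(shouldAlert("Normal", "Created container"))                  // false
}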

aantn commented

@bugok You might be interested in an open source project I wrote, http://robusta.dev/

We implemented what you mentioned.