suzuki-shunsuke/go-graylog

Support new Alerts & Events API

Closed this issue · 29 comments

http://127.0.0.1:9000/api/api-browser/#!/Events/Definitions

Events/Definitions : Event definition management

POST /events/definitions Create new event definition
GET /events/definitions/{definitionId} Get an event definition
PUT /events/definitions/{definitionId} Update existing event definition
DELETE /events/definitions/{definitionId} Delete event definition
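As a rough illustration of driving these four endpoints directly (a hypothetical sketch, not code from this provider), the following only prepares the HTTP requests; authentication (session or API token) is omitted, and Graylog additionally expects an X-Requested-By header on API calls:

```python
# Hypothetical sketch (not the provider's code): preparing requests against
# the Events/Definitions endpoints listed above. Auth is omitted here.
import json
import urllib.request

BASE = "http://127.0.0.1:9000/api"

def definition_url(definition_id=None):
    """URL for the collection, or for one definition if an ID is given."""
    url = BASE + "/events/definitions"
    if definition_id is not None:
        url += "/" + definition_id
    return url

def build_request(method, definition_id=None, body=None):
    """Prepare (but do not send) a request for one of the four verbs."""
    data = json.dumps(body).encode() if body is not None else None
    req = urllib.request.Request(definition_url(definition_id),
                                 data=data, method=method)
    req.add_header("Content-Type", "application/json")
    # Graylog's REST API requires this header on write requests.
    req.add_header("X-Requested-By", "example-client")
    return req

# e.g. update an existing definition ("someDefinitionId" is a placeholder):
req = build_request("PUT", "someDefinitionId", {"title": "example"})
```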

In general, Suzuki, did you try the Graylog provider with version 3.1? Is it compatible?

We have found that in v3.1, alerts provisioned with this provider will not work until the Graylog master is restarted, as the restart runs a migration job.

https://docs.graylog.org/en/3.1/pages/upgrade/graylog-3.1.html#alerts

Hello, any news about this?
It's very hard to use it now since, as @cameronattard wrote, the Graylog server needs to be restarted...
Can I help somehow (like upvoting a Graylog issue or something)?

Hi, I have good news.
I was able to create and update Event Definitions and Event Notifications via the Graylog REST API, although I can't do it with the Graylog API browser due to a bug in the API browser.

https://gist.github.com/suzuki-shunsuke/39e31d46f195863d3298a8146df3ff8b

I'll try to handle this issue.

Support Event Notification: #198

I have released the new version v8.4.0.
Please read the documentation and examples.

  • Support Event Definition

Support Event Definition. #199

I have released the new version v8.5.0.
Please read the documentation and examples.

The Event Definition API and Event Notification API are now supported, so I think we can close this issue.
Do you have any opinions?

I will try this out today and let you know how it goes. Thanks!

It seems connecting an event definition to an event notification is not currently working for me:

  notifications {
    notification_id = "${graylog_event_notification.pci_slack.id}"
  }

In the terraform state, I can see the notification configured:

                            "notifications.#": "1",
                            "notifications.0.notification_id": "5def010e751e9f000f7febd5",

But in the Graylog API browser (GET /events/definitions), it is not configured:

      "notifications": [],

@cameronattard Thank you for your feedback!
I'll check.

I have released the new version v8.5.1.
Please check.

I just played with it a bit, and I'm having trouble figuring out how to write the config.
My use case: there is a stream with severity ERROR, and I want to be alerted when the message count in that stream over the last 5 minutes is > 0.
If I create such an event definition in the UI, the API browser shows something like this:

    {
      "id": "5df1245fdae272002352df02",
      "title": "test",
      "description": "",
      "priority": 2,
      "alert": false,
      "config": {
        "type": "aggregation-v1",
        "query": "",
        "streams": [
          "5df0d1dd5ba75d0023815986"
        ],
        "group_by": [],
        "series": [
          {
            "id": "32f6a303-6a1c-449b-b1e0-24e830d41a00",
            "function": "count",
            "field": null
          }
        ],
        "conditions": {
          "expression": {
            "expr": ">",
            "left": {
              "expr": "number-ref",
              "ref": "32f6a303-6a1c-449b-b1e0-24e830d41a00"
            },
            "right": {
              "expr": "number",
              "value": 1
            }
          }
        },
        "search_within_ms": 300000,
        "execute_every_ms": 60000
      },
      "field_spec": {},
      "key_spec": [],
      "notification_settings": {
        "grace_period_ms": 0,
        "backlog_size": 0
      },
      "notifications": [],
      "storage": [
        {
          "type": "persist-to-streams-v1",
          "streams": [
            "000000000000000000000002"
          ]
        }
      ]
    }

So the conditions reference the series by some ID.
Do you have an idea how to create such an event definition?
I know it's more about Graylog itself than the provider, but maybe someone knows how to do it...
Thanks!

@gkocur it seems you can provide any arbitrary string as the series ID. Perhaps you could use https://www.terraform.io/docs/providers/random/r/uuid.html.
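As a hedged illustration of the suggestion above (a minimal sketch, not the provider's or Graylog's code): the series ID can be any client-generated string, as long as the number-ref in the conditions expression points at the same value. Here a UUID is generated client-side, the way the web UI appears to do it:

```python
# Sketch: build an aggregation-v1 config whose conditions reference the
# series by a client-generated UUID. Assumption: any unique string works
# as the series ID, per the discussion above.
import uuid

def build_count_condition_config(stream_id, threshold=0):
    """Alert when the message count in `stream_id` exceeds `threshold`."""
    series_id = str(uuid.uuid4())  # generated on the client side
    return {
        "type": "aggregation-v1",
        "query": "",
        "streams": [stream_id],
        "group_by": [],
        "series": [{"id": series_id, "function": "count", "field": None}],
        "conditions": {
            "expression": {
                "expr": ">",
                "left": {"expr": "number-ref", "ref": series_id},
                "right": {"expr": "number", "value": threshold},
            }
        },
        "search_within_ms": 300000,  # look at the last 5 minutes
        "execute_every_ms": 60000,
    }

config = build_count_condition_config("5df0d1dd5ba75d0023815986")
```

In Terraform the same idea corresponds to interpolating one random_uuid result into both the series id and the conditions ref.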

@suzuki-shunsuke looks like the notifications are now being set correctly, thanks for fixing so quickly!

I'll fix some bugs.
#202

@cameronattard yes, I tried sniffing what the web client sends to the API, and you are right: it seems the UUIDs are generated on the client side. Thank you!

I have released the new version v8.5.3.
Please check.

If there is no problem, I'll close this issue.

Just tried 8.5.3. Generally it works, but I have a small problem: if I run terraform apply and then terraform plan, it shows changes it wants to make.
Running apply again didn't solve it.
My definition:

resource "graylog_event_definition" "errors-in-logs" {
  title = "${var.env_name}-errors"
  alert = true
  priority = 2
  notifications {
    notification_id = graylog_event_notification.environment_slack_notification.id
  }
  config = <<EOF
{
  "type": "aggregation-v1",
  "query": "*",
  "streams": [
    "${graylog_stream.api-errors-alerts.id}",
    "${graylog_stream.trail-errors.id}",
    "${graylog_stream.smapi-errors.id}"
  ],
  "group_by": [],
  "series": [
    {
      "id": "${random_uuid.event_definition_errors_in_logs.result}",
      "function": "count",
      "field": null
    }
  ],
  "conditions": {
    "expression": {
      "expr": ">",
      "left": {
        "expr": "number-ref",
        "ref": "${random_uuid.event_definition_errors_in_logs.result}"
      },
      "right": {
        "expr": "number",
        "value": 0
      }
    }
  },
  "search_within_ms": 300000,
  "execute_every_ms": 60000
}
EOF
  notification_settings {
    grace_period_ms = 300000
    backlog_size = 0
  }
}

the terraform plan returns:

Terraform will perform the following actions:

  # module.graylog-common.graylog_event_definition.errors-in-logs will be updated in-place
  ~ resource "graylog_event_definition" "errors-in-logs" {
        alert      = true
      ~ config     = jsonencode(
          ~ {
                conditions       = {
                    expression = {
                        expr  = ">"
                        left  = {
                            expr = "number-ref"
                            ref  = "b2f16272-8502-b8f0-f14c-93a3d0771a60"
                        }
                        right = {
                            expr  = "number"
                            value = 0
                        }
                    }
                }
                execute_every_ms = 60000
                group_by         = []
                query            = "*"
                search_within_ms = 300000
              ~ series           = [
                  ~ {
                        field    = null
                        function = "count"
                        id       = "b2f16272-8502-b8f0-f14c-93a3d0771a60"
                    },
                ]
              ~ streams          = [
                  + "5df0d0ece40428002325e875",
                    "5de77d9e5ba75d0023780403",
                    "5de77d9eab60bc00246a70cd",
                  - "5df0d0ece40428002325e875",
                ]
                type             = "aggregation-v1"
            }
        )
        field_spec = jsonencode({})
        id         = "5df2547cdae27200235505ce"
        priority   = 2
        title      = "gseprod-errors"

        notification_settings {
            backlog_size    = 0
            grace_period_ms = 300000
        }

        notifications {
            notification_id = "5df25416dae272002355052b"
        }
    }

Plan: 0 to add, 1 to change, 0 to destroy.

I was able to work around it by setting the streams in a different order, so that the IDs are sorted alphabetically. So it looks like a sort-order problem?

@gkocur
Thank you for your feedback.
As you said, it is a sort-order problem.
It seems that the order of streams should be ignored, so I have fixed it. #205
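The fix can be sketched as follows (a minimal illustration, not the provider's actual Go code): when deciding whether the remote state differs from the configured one, compare the streams lists ignoring order, so that reordered IDs do not produce a spurious diff like the one in the plan above.

```python
# Sketch: order-insensitive comparison of stream ID lists, so that
# ["a", "b"] and ["b", "a"] are treated as equal and no diff is shown.
def streams_equal(configured, remote):
    """Compare two lists of stream IDs ignoring order; duplicates are
    not expected, so a sorted comparison is sufficient."""
    return sorted(configured) == sorted(remote)

configured = ["5df0d0ece40428002325e875", "5de77d9e5ba75d0023780403"]
remote = ["5de77d9e5ba75d0023780403", "5df0d0ece40428002325e875"]
```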

I have released the new version v8.5.4.
Please check.

@suzuki-shunsuke thank you for the quick fix! Now it works OK, I don't see any problems :).

I'll close this issue.

@gkocur @cameronattard
Thank you for your contribution!