mholt/caddy-ratelimit

How to set up Consul storage

dazoot opened this issue · 4 comments

I am trying to get the distributed config working.

Currently we have our storage in Consul via https://github.com/pteich/caddy-tlsconsul.
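
The storage itself is configured at the top level of the Caddy JSON, roughly like this (a trimmed sketch; the "address" value is a placeholder and other caddy-tlsconsul options such as the ACL token and key prefix are omitted, so check that plugin's docs for the exact fields):

{
  "storage": {
    "module": "consul",
    "address": "127.0.0.1:8500"
  }
}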

The rate_limit handler config looks like this for now:

{
  "handler": "rate_limit",
  "rate_limits": {
    "msft_scanners": {
      "match": [
        {
          "remote_ip": {
            "ranges": [
              "10.10.10.1/24"
            ]
          }
        }
      ], 
      "key": "msft",
      "window": "1m",
      "max_events": 2
    } 
  },
  "distributed": {
    "write_interval": "30s",
    "read_interval": "10s"
  }
}

On start I get an error regarding the instance UUID:

run: loading initial config: loading new config: loading http app module: provision http: server nzm: setting up route handlers: route 0: loading handler modules: position 0: loading module 'rate_limit': provision http.handlers.rate_limit: open /etc/caddyserver/.local/share/caddy/instance.uuid: no such file or directory

Please share your entire config.

How are you running Caddy?

When running in distributed mode, Caddy needs to be able to write its instance ID to a file, so make sure Caddy's default data directory ($XDG_DATA_HOME/caddy, or $HOME/.local/share/caddy when XDG_DATA_HOME is unset) is readable and writable.

That path /etc/caddyserver doesn't look like the standard location; when running as a systemd service, the caddy user's HOME is usually /var/lib/caddy, so you must be doing something the "unofficial" way. Please elaborate on your setup.

As I have mentioned, our storage is Consul, not a shared file system.
https://github.com/pteich/caddy-tlsconsul

Could it be that the Consul storage is used only by the ACME part and not by this module?

mholt commented

@dazoot In some cases the file system is still used to store state that is independent of the configured storage, and the instance ID is one of those cases: Caddy writes it to the local file system even if you configure a different storage module.

The actual rate-limiting state, however, is stored in the configured storage; that is how Caddy coordinates rate limiting across a fleet. But each instance needs to know its own instance ID in order to do that, which is why they can't all share the same ID.
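
To put the two pieces together, here is a trimmed sketch of how the configured storage and the distributed rate limiter relate in one JSON config. The consul storage fields and the listen address are placeholders to adapt to your setup (see the caddy-tlsconsul docs for its exact options); the rate_limit portion just mirrors your handler config above, minus the matcher:

{
  "storage": {
    "module": "consul",
    "address": "127.0.0.1:8500"
  },
  "apps": {
    "http": {
      "servers": {
        "nzm": {
          "listen": [":443"],
          "routes": [
            {
              "handle": [
                {
                  "handler": "rate_limit",
                  "rate_limits": {
                    "msft_scanners": {
                      "key": "msft",
                      "window": "1m",
                      "max_events": 2
                    }
                  },
                  "distributed": {
                    "write_interval": "30s",
                    "read_interval": "10s"
                  }
                }
              ]
            }
          ]
        }
      }
    }
  }
}

The shared rate-limiting state goes into whatever "storage" points at (Consul here), while instance.uuid is still written to the local data directory, so that directory has to exist and be writable on every instance.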

Ok. Will close for now.