DevOps-Nirvana/Kubernetes-Volume-Autoscaler

Random feature ideas to consider (see here if you wish to contribute)

AndrewFarley opened this issue · 0 comments

  • Listen/watch for events on the PV/PVC, or read from Prometheus, to monitor and confirm the resize actually happens; log and/or Slack the result accordingly
  • Catch WHY a resize failed and make sure to log the reason and send it to Slack/Kubernetes events
  • Check whether the StorageClass has allowVolumeExpansion enabled to help ensure the expansion will succeed; act on it and/or report it
  • Add a full Helm chart values documentation table in Markdown (tie into adding docs for Universal Helm Charts)
  • Build and publish the Helm chart in a GitHub Action and push the static YAML as well, to make things "easier" and automated
  • Add badges to the README regarding GitHub Actions success/failure
  • Add test coverage to ensure the software works as intended moving forward
  • Add per-PVC annotations to (re)direct Slack notifications to different webhooks, different channel(s), and/or different message prefixes/suffixes
  • Add better examples of "best practices" when using Helm (e.g. as a subchart)
  • Test it and add working examples of using this on other cloud providers (Azure / Google Cloud)
  • Auto-detect (or let the user choose) a different provider (e.g. AWS/Google) and set different per-provider defaults (e.g. wait time, min/max disk size, min disk increment, etc.)
  • Digest message all changes made in a single loop into one big message to prevent Slack spam
  • Make it possible to autoscale up nodes' root volumes? This will require significant engineering, as it would mean talking to the provider APIs and would require an IAM role/OIDC/access key/etc. to be able to scale the root volume up. Likely very challenging, but would be ideal to handle. Consider writing a new service dedicated to this instead.
  • Add a filter to allow users to specify which StorageClasses to support, with a default of "all" ("*")

NOTE: If anyone wants us to prioritize any of these issues, please say something.