kubernetes-sigs/gcp-compute-persistent-disk-csi-driver

Feature request: Support specifying PD labels using PVC annotations

Ironlink opened this issue · 6 comments

We use PD labels to indicate which team owns a disk, and which product feature a disk is related to. We do this for the purposes of filtering in UIs, as well as for cost analysis.

Currently, PD labels can only be set through the StorageClass. This is inconvenient for us, as it leads to an unnecessary number of storage classes, most of which are used by only a small handful of disks. It would be much more convenient if we could specify disk labels through an annotation on the PersistentVolumeClaim. Is this capability something you can add to the PD CSI driver?
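For context, the driver's StorageClass `labels` parameter is how labels are applied today, which means one StorageClass per label combination. A rough sketch (the StorageClass name and label values are illustrative, and the PVC annotation shown is the *requested* behavior, not something the driver supports):

```yaml
# Today: labels are fixed per StorageClass, so each team/feature
# combination needs its own class.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: team-alpha-checkout-pd   # hypothetical name
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-balanced
  labels: team=alpha,feature=checkout   # comma-separated key=value pairs
volumeBindingMode: WaitForFirstConsumer
---
# Requested (does NOT exist): per-claim labels via a PVC annotation,
# so a single StorageClass could serve many teams.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: checkout-data
  annotations:
    example.com/pd-labels: team=alpha,feature=checkout   # hypothetical key
spec:
  storageClassName: standard-pd
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 100Gi
```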

In the past the per-storage class labels have worked fine for the use cases I've heard about.

You raise a good point. However, to my knowledge the only attributes plumbed through to the CSI request come from the StorageClass. Plumbing annotations from the PVC itself would require changes to the provisioner sidecar. Maybe it would be best to raise an issue on https://github.com/kubernetes-csi/external-provisioner and see what the opinion is there?
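To illustrate the limitation: the external-provisioner sidecar already has an `--extra-create-metadata` flag that forwards a few fixed PVC/PV fields to the driver's `CreateVolume` call alongside the StorageClass parameters, but arbitrary PVC annotations are not among them, so supporting them would need a sidecar change. A sketch of what the driver sees (values are placeholders):

```yaml
# Parameters map received by the CSI driver's CreateVolume when the
# sidecar runs with --extra-create-metadata:
type: pd-balanced                          # from StorageClass parameters
labels: team=alpha,feature=checkout        # from StorageClass parameters
csi.storage.k8s.io/pvc/name: checkout-data # injected by the sidecar
csi.storage.k8s.io/pvc/namespace: default  # injected by the sidecar
csi.storage.k8s.io/pv/name: pvc-1234-...   # injected by the sidecar
# PVC annotations are not forwarded, so the driver cannot see them.
```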

/kind feature

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

/lifecycle frozen

Is there a GCP feature request I can start for this? If not, can one be created, please?

The issue was raised on the external-provisioner, as that is the right Kubernetes layer for this (the PD CSI driver doesn't see the Kubernetes objects, so it can't read annotations).

kubernetes-csi/external-provisioner#760.

Looks like it got auto-closed but I'm sure if you're interested in working on it it could be re-opened.