kubernetes-csi/external-provisioner

Pass namespace annotations to driver when --extra-create-metadata is enabled

rdpsin opened this issue · 16 comments

Currently, when --extra-create-metadata is enabled, external-provisioner passes the PVC name, the PVC namespace, and the PV name to the CSI driver in CreateVolumeRequest.Parameters.
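
Concretely, those arrive as the parameter keys csi.storage.k8s.io/pvc/name, csi.storage.k8s.io/pvc/namespace and csi.storage.k8s.io/pv/name.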

It would be useful to pass namespace annotations to the CSI driver as well. This can be very helpful to cluster admins, since a common design is to set boundaries and controls at the namespace level. For example, it would allow the CSI driver to attach namespace annotations as tags to the volume for cost allocation, backups, IAM policies, et cetera.

Design:

A new parameter is created, csi.storage.k8s.io/pvc/namespace/annotations, whose value is a string of comma-separated key-value pairs:

csi.storage.k8s.io/pvc/namespace/annotations: "foo=bar,baz=test"
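
On the driver side, CreateVolume could then split this value back into individual annotations. A minimal sketch in Go, assuming the proposed key name and the simple comma/equals encoding above (parseNamespaceAnnotations is a hypothetical helper, and real annotation values may themselves contain commas or equals signs, so a production version would need an escaping rule):

    package main

    import (
        "fmt"
        "strings"
    )

    // parseNamespaceAnnotations decodes the proposed
    // "csi.storage.k8s.io/pvc/namespace/annotations" parameter
    // ("foo=bar,baz=test") into a map. Hypothetical helper, not part
    // of any existing driver or sidecar.
    func parseNamespaceAnnotations(params map[string]string) (map[string]string, error) {
        raw, ok := params["csi.storage.k8s.io/pvc/namespace/annotations"]
        if !ok || raw == "" {
            return nil, nil // flag disabled, or the namespace has no annotations
        }
        annotations := make(map[string]string)
        for _, pair := range strings.Split(raw, ",") {
            kv := strings.SplitN(pair, "=", 2)
            if len(kv) != 2 {
                return nil, fmt.Errorf("malformed annotation pair %q", pair)
            }
            annotations[kv[0]] = kv[1]
        }
        return annotations, nil
    }

    func main() {
        params := map[string]string{
            "csi.storage.k8s.io/pvc/namespace/annotations": "foo=bar,baz=test",
        }
        tags, _ := parseNamespaceAnnotations(params)
        fmt.Println(tags) // map[baz:test foo:bar]
    }

A driver could then attach the decoded pairs as cloud tags when it creates the volume.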

Note that similar proposals have been put forward, but they include passing PVC annotations to the CSI driver (see #86).

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

/remove-lifecycle rotten

I fully support this request. Today you cannot specify metadata, and in particular tags, in a PVC that get passed down to a CSI driver. Tagging resources is helpful in a multitenant environment for policy tracking and cost charging. At present, the only way to tag managed resources dynamically created by a PVC on a cloud platform is the following:

  • Create a custom storage class and specify tags in the parameters:

        apiVersion: storage.k8s.io/v1
        kind: StorageClass
        metadata:
          name: contoso-managed-csi-premium
        provisioner: disk.csi.azure.com
        parameters:
          skuname: Premium_LRS
          tags: costcenter=1234,tenant=Contoso
        reclaimPolicy: Delete
        volumeBindingMode: WaitForFirstConsumer
        allowVolumeExpansion: true

  • Create a PVC based on the custom storage class:

        apiVersion: v1
        kind: PersistentVolumeClaim
        metadata:
          name: contoso-managed-csi-premium
        spec:
          accessModes:
          - ReadWriteOnce
          storageClassName: contoso-managed-csi-premium
          resources:
            requests:
              storage: 50Gi

  • Use the PVC in your deployment definition.

In a multi-tenant environment, this approach can lead to two kinds of problems:

  • Proliferation of custom storage classes, one for each tenant. If you share an AKS cluster across hundreds of tenants, each using PVs, and you want to use this technique to tag managed disks, you can quickly end up with hundreds of custom storage classes.
  • Storage classes are not namespaced resources, so this can lead to issues when trying to segregate or restrict access using Kubernetes RBAC or Azure AD integrated RBAC.

You can use a trick to avoid the proliferation of storage classes:

  • Create a custom storage class with the proper tags for a tenant.
  • Based on that storage class, create one or more PVCs, which dynamically create the related managed resources on the cloud platform.
  • Delete the custom storage class.

This solution is doable, but it's not very clean.
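
For completeness, that create/provision/delete sequence can also be scripted. Below is a rough client-go sketch, with illustrative names and values (the "contoso-" objects are examples, not anything real). Note the caveat: the class must still exist while the volume is provisioned, so this variant uses volumeBindingMode: Immediate rather than WaitForFirstConsumer, and in practice you would wait for the PVC to become Bound before deleting the class:

    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        storagev1 "k8s.io/api/storage/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        ctx := context.Background()

        // 1. Create a per-tenant storage class that carries the tags.
        binding := storagev1.VolumeBindingImmediate
        reclaim := corev1.PersistentVolumeReclaimDelete
        sc := &storagev1.StorageClass{
            ObjectMeta:  metav1.ObjectMeta{Name: "contoso-managed-csi-premium"},
            Provisioner: "disk.csi.azure.com",
            Parameters: map[string]string{
                "skuname": "Premium_LRS",
                "tags":    "costcenter=1234,tenant=Contoso",
            },
            ReclaimPolicy:     &reclaim,
            VolumeBindingMode: &binding,
        }
        if _, err := client.StorageV1().StorageClasses().Create(ctx, sc, metav1.CreateOptions{}); err != nil {
            panic(err)
        }

        // 2. Create the PVC(s); with Immediate binding the disk is
        // provisioned (and tagged) right away.
        className := sc.Name
        pvc := &corev1.PersistentVolumeClaim{
            ObjectMeta: metav1.ObjectMeta{Name: "contoso-data", Namespace: "contoso"},
            Spec: corev1.PersistentVolumeClaimSpec{
                AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
                StorageClassName: &className,
                // On client-go v0.29+ the type here is
                // corev1.VolumeResourceRequirements; this sketch uses
                // the older corev1.ResourceRequirements.
                Resources: corev1.ResourceRequirements{
                    Requests: corev1.ResourceList{corev1.ResourceStorage: resource.MustParse("50Gi")},
                },
            },
        }
        if _, err := client.CoreV1().PersistentVolumeClaims("contoso").Create(ctx, pvc, metav1.CreateOptions{}); err != nil {
            panic(err)
        }

        // 3. Delete the class again to avoid proliferation. In practice,
        // poll the PVC until it is Bound before this step.
        if err := client.StorageV1().StorageClasses().Delete(ctx, sc.Name, metav1.DeleteOptions{}); err != nil {
            panic(err)
        }
    }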

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

/remove-lifecycle stale

@rdpsin IMO, fetching namespace annotations shouldn't be in scope for the provisioner sidecar. The API object the provisioner really cares about is the PVC and its bindings; fetching namespace annotations and passing them along in the CreateVolume request looks out of scope.

cc @jsafrane @msau42 @xing-yang

We already support providing the following values in csi-provisioner, and this time we want to provide one more: pvc.annotations. That is already supported for the csi.storage.k8s.io/provisioner-secret-name field in csi-provisioner; we just want to make this functionality more generic.

  • ${pvc.metadata.name}
  • ${pvc.metadata.namespace}
  • ${pv.metadata.name}

@andyzhangx it looks like there is some confusion here: the issue description says namespace annotations are to be passed with CreateVolume:

It would be useful to also pass namespace annotations to the CSI driver as well. This can be very helpful to cluster admins since a common design is to set boundaries/controls at the namespace level. 

and, IIUC, your comment here talks about PVC annotations (#714 (comment)). Can you please clarify?

@humblec I meant PVC annotations, related to #86; fetching the namespace's annotations should be out of scope, since they are not directly related to the PVC.

The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle stale
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.