kubernetes-sigs/gcp-compute-persistent-disk-csi-driver

Tuning ProvisionedIOPS for Extreme Persistent Disk

mbiagetti opened this issue · 7 comments

I'm developing a tool where the customer can specify the IOPS of a pd-extreme disk.
Are storage classes the only way to specify IOPS for a volume? Can this be passed somehow via a PVC?

I also tried to set that value after creation, but that does not seem to be possible (the API does not support it).

Any tips/advice?

Do you know if this could be supported in the future?

If you use the latest version of the PD CSI driver, which is 1.9.1, you can create pd-extreme volumes with StorageClass parameters that specify the IOPS; see the example YAML sketched below.
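A minimal sketch, assuming the provisioned-iops-on-create StorageClass parameter supported by the driver (the object names and IOPS value here are illustrative, not taken from this thread):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: pd-extreme-example          # hypothetical name
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-extreme
  provisioned-iops-on-create: "10000"   # IOPS requested for each newly provisioned volume
volumeBindingMode: WaitForFirstConsumer

The PVC itself carries no IOPS field; it only references the class and inherits its parameters:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pd-extreme-claim            # hypothetical name
spec:
  storageClassName: pd-extreme-example
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Gi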

However, if you want to update the IOPS value to something else after creation, currently you need to do that via the PD APIs directly. We are still working on support for updating IOPS in OSS Kubernetes.

Does this answer your question?

Extreme PDs can only have their IOPS set at creation time anyway, so the storage class API in 1.9.1 should be enough.

(It's also in 1.9.0, but 1.9.1 upgrades the base image to pick up some CVE fixes.)

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

/lifecycle stale

/lifecycle frozen