Multi-attach for EBS io1 volume type
Neophyte12345 opened this issue · 7 comments
Is your feature request related to a problem?/Why is this needed
The CSI driver currently only supports multi-attach for io2; however, io2 is not available in us-gov-west-1, which limits the use of the driver in certain regions.
/feature
Describe the solution you'd like in detail
The CSI driver should support provisioning storage with multi-attach enabled for io1 volumes.
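For illustration, provisioning would presumably mirror the existing io2 multi-attach flow (a StorageClass plus a ReadWriteMany, block-mode PVC). The manifest below is a hypothetical sketch with `type: io1` swapped in — not something the driver accepts today:

```yaml
# Hypothetical sketch: multi-attach provisioning as the driver does it for
# io2 today, with type: io1 substituted (NOT currently supported).
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ebs-io1-multi-attach
provisioner: ebs.csi.aws.com
parameters:
  type: io1          # the requested, currently unsupported value
  iops: "1000"       # provisioned-IOPS volumes require an iops value
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-volume
spec:
  accessModes:
    - ReadWriteMany   # multi-attach requires RWX...
  volumeMode: Block   # ...and raw block mode
  storageClassName: ebs-io1-multi-attach
  resources:
    requests:
      storage: 100Gi
```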
Describe alternatives you've considered
N/A
Additional context
https://docs.aws.amazon.com/ebs/latest/userguide/ebs-volumes-multi.html
/kind feature
@Neophyte12345 Can you share the details of the use case that requires multi-attach with io1/io2 volumes?
@ksliu58 I guess the use case is for GovCloud users who deploy their workloads in regions that do not offer the EBS io2 type. It would be nice if the CSI driver could provision EBS io1 volumes with the multi-attach capability enabled at provisioning time. We have certain apps that require shared EBS volumes to be attached across nodes.
Sorry, I should be more specific. Multi-attach is a nuanced feature, because you will need I/O fencing to protect against data corruption. I am curious why your use case requires the volume to be accessed simultaneously by more than one node. What type of application will you be running? Does it operate with a read/write-many access mode?
@ksliu58 The application in question is Polyspace Access, which deploys two pods that need a shared volume: a web server pod and an ETL pod. We did implement pod affinity to co-locate the pods on the same node, which circumvented some of the limitations, but we thought it would be nice to have multi-attach EBS capability regardless of which nodes the pods get scheduled on, because sometimes the nodes are not big enough to host all the pods given their resource requests and instance type. They operate using access mode RWO. Hope this helps.
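The pod-affinity workaround described above might look roughly like the following (labels, names, and the image are illustrative placeholders, not taken from the actual deployment):

```yaml
# Illustrative sketch of the podAffinity workaround: pin the ETL pod to the
# same node as the web server pod so both can mount the same RWO EBS volume.
apiVersion: v1
kind: Pod
metadata:
  name: etl
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app: polyspace-web    # hypothetical label on the web server pod
          topologyKey: kubernetes.io/hostname
  containers:
    - name: etl
      image: example/etl:latest     # placeholder image
      volumeMounts:
        - name: shared
          mountPath: /data
  volumes:
    - name: shared
      persistentVolumeClaim:
        claimName: shared-volume    # hypothetical RWO claim
```

This works only while the node has enough capacity for both pods, which is exactly the limitation that motivates the multi-attach request.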
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
Hi @Neophyte12345,
Thank you for your patience, but it looks like not all regions support the multi-attach feature for io1 volumes. Therefore, this driver feature request would not actually help your use case.
You can confirm whether or not multi-attach for io1 is available in your desired regions by attempting an AWS CLI command similar to the following (io1 volumes require `--size` and `--iops` to be specified; delete the volume afterwards if creation succeeds):
aws ec2 create-volume --volume-type io1 --size 100 --iops 1000 --multi-attach-enabled --availability-zone <SOME-AZ>
You will have to reach out to AWS directly to request that io1 volumes support multi-attach in your desired regions.