kubernetes-csi/docs

Ambiguous terminology `Access Modes (RWO,ROX,RWX)`

hoyho opened this issue · 9 comments

hoyho commented

The table at
https://github.com/kubernetes-csi/docs/blob/master/book/src/drivers.md#production-drivers
has a column, Supported Access Modes, with values like Read/Write Single Pod or Read/Write Multiple Pods.

According to the Kubernetes documentation at https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes

The access modes are:
ReadWriteOnce – the volume can be mounted as read-write by a single node
ReadOnlyMany – the volume can be mounted read-only by many nodes
ReadWriteMany – the volume can be mounted as read-write by many nodes
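For reference, these modes are what a user requests in a PersistentVolumeClaim. A minimal example requesting ReadWriteMany (the claim name and storage size here are placeholders, not from the docs under discussion):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc   # placeholder name
spec:
  accessModes:
    - ReadWriteMany   # volume may be mounted read-write by many nodes
  resources:
    requests:
      storage: 1Gi    # placeholder size
```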

My point is that Read/Write Multiple Pods is not equivalent to read-write by many nodes:
case 1: what if the Pods are on the same node?
case 2: what if the Pods are on different nodes?

Also take a look at the CSI specification: https://github.com/container-storage-interface/spec/blob/4731db0e0bc53238b93850f43ab05d9355df0fd9/csi.proto#L365
It defines AccessMode in terms of SINGLE_NODE/MULTI_NODE, not Pods.

The Access Mode terminology needs to be clarified.
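For comparison, the CSI spec's access modes are node-scoped. An abridged sketch of the `VolumeCapability.AccessMode` enum from csi.proto (comments paraphrased from the spec):

```proto
// Abridged from VolumeCapability.AccessMode.Mode in csi.proto.
enum Mode {
  UNKNOWN = 0;
  SINGLE_NODE_WRITER = 1;       // published once, read/write, on a single node
  SINGLE_NODE_READER_ONLY = 2;  // published once, read-only, on a single node
  MULTI_NODE_READER_ONLY = 3;   // published read-only on multiple nodes
  MULTI_NODE_SINGLE_WRITER = 4; // multiple nodes, only one may write
  MULTI_NODE_MULTI_WRITER = 5;  // multiple nodes, all may read/write
}
```

Note that every case is expressed per node; the spec says nothing about how many Pods on a node may use the volume, which is exactly the gap this issue is about.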

hoyho commented

/assign @saad-ali

There is a long discussion about this issue here: https://groups.google.com/d/msg/container-storage-interface-community/-g_wma86P60/DlVw5_GIBQAJ

We should document the current mapping, in addition to figuring out how to make it better (in a backwards-compatible way).

CC @bswartz

I'm fairly confident there is no backwards-compatible way to fix this. The least disruptive approach is to relax the CSI spec until it allows what Kubernetes already does, then add new modes for users who really want the more restrictive behavior.

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

/remove-lifecycle stale

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@fejta-bot: Closing this issue.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.