[BUG] rpc error: code = InvalidArgument desc = ControllerPublishVolume Volume capability is not compatible:
ip2cloud opened this issue · 6 comments
In Linode K8s:
AttachVolume.Attach failed for volume "pvc-XXXXXXX" : rpc error: code = InvalidArgument desc = ControllerPublishVolume Volume capability is not compatible: volume_id:"2808205-XXXXXXXX" node_id:"444444" volume_capability:<mount:<fs_type:"ext4" > access_mode:<mode:MULTI_NODE_MULTI_WRITER > > volume_context:<key:"storage.kubernetes.io/csiProvisionerIdentity" value:"444444-555555-linodebs.csi.linode.com" >
This case is similar to #8. If you want to use a "local volume" for sharing data between `api` and `worker`, which run on separate nodes, you have to create storage that supports `ReadWriteMany` and define a proper storage class through `values.yaml`. The log you provided suggests that an `ext4` volume, which does not suit this purpose, was provisioned through the Container Storage Interface. Alternatively, you may use `externalS3` instead of a "local volume".
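As a rough illustration only, a `values.yaml` override requesting RWX-capable storage might look like the sketch below; the key names are hypothetical and depend on the chart's actual `values.yaml`:

```yaml
# Hypothetical values.yaml override -- key names depend on the chart you install.
# The idea: request a storage class that supports ReadWriteMany so that the
# api and worker pods running on different nodes can mount the same volume.
persistence:
  enabled: true
  storageClass: "my-rwx-storageclass"   # must be backed by RWX-capable storage (e.g. NFS)
  accessMode: ReadWriteMany
  size: 10Gi
```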
Hello. Thanks for getting back to us. The case is that I am running on Linode's Kubernetes service and it does not allow two pods to access the same disk. The disk is not on the node that the two pods are on, and neither of them can access it. Can I create a disk for each pod? If so, where do I change this, and which YAML file do I change? Can you help me?
You need to consult Linode support to find out which storage classes support `ReadWriteMany`, as the implementation of such features differs among cloud service providers.
I checked with Linode and the Kubernetes service does not actually support ReadWriteMany. :(
What I found was an option to create an NFS server in the same kubernetes network and change the class. (Same solution as EKS). https://startup2scalable.com/2024/02/read-write-many-volumes-on-lke-with-nfs/
But what do you suggest? S3 or NFS?
How do I integrate S3 into Helm?
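For reference, the NFS route mentioned above typically amounts to installing an NFS provisioner in the cluster and pointing claims at its storage class. A minimal sketch of such a claim, assuming a provisioner-supplied class named `nfs-client` (the name is illustrative, not from this chart):

```yaml
# Illustrative PVC using an NFS-backed storage class. The class name comes from
# whichever NFS provisioner you install; ReadWriteMany works because NFS allows
# multiple nodes to mount the same share for writing.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-client
  resources:
    requests:
      storage: 10Gi
```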
Generally speaking, object storage (e.g. S3) costs less than file systems; you may check pricing with your cloud service provider. Since we provide no warranty for your service, we cannot offer advice or conclusions on system design, but you may wish to check this article.
We assume that you are familiar with altering Helm configuration via `values.yaml` (e.g. `helm install -f <your-values.yaml>`). To opt in to S3-compatible object storage, fill in this section and set `externalS3.enable=true`.
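A minimal sketch of such an override, assuming hypothetical field names under `externalS3` (only `enable` is confirmed above; check the chart's `values.yaml` for the authoritative keys):

```yaml
# Hypothetical externalS3 section -- field names other than "enable" are
# assumptions; consult the chart's values.yaml for the exact schema.
externalS3:
  enable: true
  host: "us-east-1.linodeobjects.com"   # example Linode Object Storage endpoint
  port: 443
  accessKey: "<your-access-key>"
  secretKey: "<your-secret-key>"
  bucketName: "<your-bucket>"
```

You would then apply it with something like `helm install <release> <chart> -f <your-values.yaml>`.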
Thanks @BorisPolonsky! Resolved with Linode's S3 service.