purestorage/helm-charts

iSCSI CSI plugin mount failure

Closed this issue · 9 comments

Hi,
I am deploying a FlashArray on Kubernetes. I have installed the CSI Helm chart, and I can see that persistent volume claims are created correctly.
When I try to use a PVC in a pod, the pod stays in ContainerCreating. The logs on the node are not very helpful; all I have is
(durationBeforeRetry 2m2s). Error: "Volume not attached according to node status for volume \"pvc-f33cc299-070a-11ea-b821-54802852354e\" (UniqueName: \"kubernetes.io/csi/pure-csi^k8s/pvc-f33cc299-070a-11ea-b821-54802852354e\") pod \"postgres-56448cf857-p2s44\" (UID: \"f9c8408d-070a-11ea-b9c7-5480284e548e\") "

Nothing really useful from the CSI pods either.

My setup looks like this:

  • 11 bare-metal servers running Kubernetes 1.13.12
  • Calico as CNI (network policies)
  • Docker CE 18.09
  • Ubuntu 18.04

We have one VLAN NIC dedicated to iSCSI traffic, and I can reach the array without any issues; I verified that a volume created on the SAN array is visible through iscsiadm, and it works.
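The verification looked roughly like this; this is a sketch rather than my exact session, and the portal IP is a placeholder:

```shell
# Discover targets on the array's iSCSI portal (192.0.2.10 is a placeholder)
iscsiadm -m discovery -t sendtargets -p 192.0.2.10

# Log in to the discovered targets, then show active sessions and attached disks
iscsiadm -m node --login
iscsiadm -m session -P 3
```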

Multipath is installed too, but it is not really configured for the FlashArray.
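For anyone reading along: Pure publishes recommended multipath settings for FlashArray. The /etc/multipath.conf fragment below is only a sketch; verify every value against Pure's current Linux best-practices guide before using it.

```
# /etc/multipath.conf -- sketch only; confirm against Pure's recommended
# Linux settings for FlashArray before relying on it.
devices {
    device {
        vendor                "PURE"
        product               "FlashArray"
        path_selector         "queue-length 0"
        path_grouping_policy  "group_by_prio"
        path_checker          "tur"
        fast_io_fail_tmo      10
        no_path_retry         0
    }
}
```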

On the SAN array side, I am using an array-admin API token (more privilege than necessary in my view, but since it isn't working, who cares...). I can't see any hosts being added, even though a Pure Storage presales engineer told me this should be automatic.

I feel a little stuck; I don't see where I could be going wrong.

Do you have any advice on getting more detailed logs?

Or if I'm wrong, please tell me; nobody's perfect :-)

Thanks
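The usual places to get more detail are the kubelet journal on the node where the pod is scheduled and the CSI driver pods' logs. The namespace, labels, and pod name below are assumptions to adapt to your deployment:

```shell
# On the affected node: kubelet messages around volume attach/mount
journalctl -u kubelet --since "15 min ago" | grep -i csi

# CSI driver pod logs (namespace and pod name are assumptions; adjust to yours)
kubectl -n kube-system get pods | grep pure
kubectl -n kube-system logs <pure-csi-node-pod> --all-containers
```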

2vcps commented

Do you have the feature gates for CSI enabled on the kubelet for all the nodes?
https://github.com/purestorage/helm-charts/tree/master/pure-csi#additional-configuration-for-kubernetes-113-only
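For 1.13, the CSIDriver and CSINodeInfo objects sit behind alpha feature gates. A sketch of the flags is below; the gate names follow the kubernetes-csi docs for that release, but confirm them against the README linked above:

```
# Sketch: kubelet (and kube-apiserver) flag on each node; confirm the exact
# gate names against the pure-csi README for Kubernetes 1.13.
--feature-gates=CSIDriverRegistry=true,CSINodeInfo=true
```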

2vcps commented

Also, please open a support request with Pure Support so it can be tracked internally for your FlashArray.

It might be worth checking whether you have a proper iSCSI IQN generated and ready for use.
Please check /etc/iscsi/initiatorname.iscsi; it should contain a line like "InitiatorName=".
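To illustrate, a well-formed IQN follows the iqn.YYYY-MM.<reversed-domain>[:<suffix>] pattern; a quick format check (the sample value below is made up, not a real host's IQN) could look like:

```shell
# Sample line as it would appear in /etc/iscsi/initiatorname.iscsi
# (the value below is a made-up example, not a real host's IQN):
line='InitiatorName=iqn.1993-08.org.debian:01:abcdef123456'

# Validate: "InitiatorName=iqn." followed by YYYY-MM, a naming authority,
# and an optional colon-separated suffix.
if echo "$line" | grep -Eq '^InitiatorName=iqn\.[0-9]{4}-[0-9]{2}\.[a-z0-9.-]+(:.*)?$'; then
  echo "IQN format OK"
else
  echo "IQN format looks wrong"
fi
```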

Could you also please post the output of "kubectl describe pod" for the pod that is not running?

Do you have the feature gates for CSI enabled on the kubelet for all the nodes?
https://github.com/purestorage/helm-charts/tree/master/pure-csi#additional-configuration-for-kubernetes-113-only

No, in fact I didn't realize that I still needed feature flags, since CSI was supposed to be GA in Kubernetes 1.13...

I'll dig a bit more, thanks.

It might be worth checking whether you have a proper iSCSI IQN generated and ready for use.
Please check /etc/iscsi/initiatorname.iscsi; it should contain a line like "InitiatorName=".

I already generate unique IQNs for each host.

By the way, I just got it working using FlexVolume!

Is the CSI feature flag supposed to be stable for production use in K8s 1.13?

Thanks for sharing the update.

Is the CSI feature flag supposed to be stable for production use in K8s 1.13?

We rely on the CSIDriver object functionality in Kubernetes.
This page (https://kubernetes-csi.github.io/docs/csi-driver-object.html#enabling-csidriver-on-kubernetes) says that it is still alpha in 1.13, so yes, you will need to ensure that this feature gate is enabled in your cluster.

BTW, just curious: if this is a new deployment, wouldn't you want to run a more recent version of Kubernetes (like 1.15)? :)