failed to provision volume with StorageClass "pure": rpc error: code = Internal desc = (root): Invalid type. Expected: object, given: null
geosword opened this issue · 11 comments
The Helm chart installs, but fails to provision a volume. This is the error message reported via kubectl describe pvc ...:
failed to provision volume with StorageClass "pure": rpc error: code = Internal desc = (root): Invalid type. Expected: object, given: null
This is echoed in the pure-provisioner-0
pod:
I0414 15:18:19.675302 1 event.go:255] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"pure-claim", UID:"1429eb88-e318-4162-992a-b25c25c91349", APIVersion:"v1", ResourceVersion:"24418135", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "pure": rpc error: code = Internal desc = (root): Invalid type. Expected: object, given: null
I0414 15:20:27.675779 1 controller.go:1199] provision "default/pure-claim" class "pure": started
I0414 15:20:27.680404 1 event.go:255] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"pure-claim", UID:"1429eb88-e318-4162-992a-b25c25c91349", APIVersion:"v1", ResourceVersion:"24418135", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/pure-claim"
W0414 15:20:27.683067 1 controller.go:887] Retrying syncing claim "1429eb88-e318-4162-992a-b25c25c91349", failure 8
E0414 15:20:27.683112 1 controller.go:910] error syncing claim "1429eb88-e318-4162-992a-b25c25c91349": failed to provision volume with StorageClass "pure": rpc error: code = Internal desc = (root): Invalid type. Expected: object, given: null
I0414 15:20:27.683147 1 event.go:255] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"pure-claim", UID:"1429eb88-e318-4162-992a-b25c25c91349", APIVersion:"v1", ResourceVersion:"24418135", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "pure": rpc error: code = Internal desc = (root): Invalid type. Expected: object, given: null
This is my test pvc manifest:
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  # Referenced in nginx-pod.yaml for the volume spec
  name: pure-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: pure
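For completeness, the nginx-pod.yaml referenced in the comment above is just a minimal pod that mounts the claim, roughly along these lines (the pod name, volume name, and mount path here are placeholders):
---
kind: Pod
apiVersion: v1
metadata:
  name: nginx
spec:
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - name: pure-vol
          mountPath: /usr/share/nginx/html
  volumes:
    - name: pure-vol
      persistentVolumeClaim:
        claimName: pure-claim   # the PVC defined above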
Here is the output from k get sc:
NAME         PROVISIONER   AGE
pure         pure-csi      17m
pure-block   pure-csi      17m
pure-file    pure-csi      17m
Deployed via helm 3:
NAME           NAMESPACE   REVISION   UPDATED                                   STATUS     CHART            APP VERSION
pure-storage   storage     1          2020-04-14 16:07:51.941290552 +0100 BST   deployed   pure-csi-1.1.0   1.1.0
I enabled debug: true in the chart to see if that gave me any more information; it did not (that I could see). I also recreated the API token, just in case I had it wrong. It still gives the same result.
Can you use either the pure-block or pure-file storageClass instead of pure, depending on whether your backend is a FlashArray or a FlashBlade? pure is a deprecated StorageClass that will be removed in the next major release of the CSI driver.
Let us know if that works.
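For example, for a FlashArray backend the test claim above only needs the class name changed:
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pure-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: pure-block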
@geosword Could you also let us know which version of the PSO CSI driver you are using? Our latest PSO CSI driver release is 5.1.0. The version you are installing is determined by the values.yaml file you supplied:
https://github.com/purestorage/helm-charts/blob/master/pure-csi/values.yaml
image:
  name: purestorage/k8s
  tag: 5.1.0
  pullPolicy: Always
If you have been using PSO for some time, there's a chance your values.yaml is out of date.
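One quick way to confirm which image tag is actually running is to check the provisioner pod directly, e.g. (substitute your namespace):
kubectl -n <namespace> get pod pure-provisioner-0 -o jsonpath='{.spec.containers[*].image}'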
Let us know if the problem is not resolved. We can assign an engineer to work with you offline to review logs. Thank you.
Hi @sdodsley, thanks for the response. I tried with both pure-file and pure-block; both return the same error when I look at k describe pvc ...
Our latest PSO CSI driver release is 5.1.0. The version you are installing is determined by the values.yaml file you supplied.
This is a new install. I already have it working on a cluster at another datacenter, with a different Pure Storage array. The only major difference is that I deployed that with Helm 2 as opposed to Helm 3 on this cluster.
But just for the avoidance of doubt, this is from k describe pod pure-provisioner-0:
Events:
Type    Reason     Age    From                                      Message
----    ------     ----   ----                                      -------
Normal  Scheduled  5m48s  default-scheduler                         Successfully assigned storage/pure-provisioner-0 to scdpink8sw2.sec.dc.comodoca.net
Normal  Pulling    5m47s  kubelet, scdpink8sw2.sec.dc.comodoca.net  Pulling image "purestorage/k8s:5.1.0"
Just to show that other methods of talking to the SAN are working:
:~ # multipath -ll
3624a937084fc180054974bcb0001cfc2 dm-0 PURE,FlashArray
size=1.0G features='2 queue_mode mq' hwhandler='1 alua' wp=rw
`-+- policy='queue-length 0' prio=50 status=active
`- 3:0:0:1 sdb 8:16 active ready running
Also, to show that a connection to the Pure Storage array web interface is possible:
:~ # telnet scdpinpure.sec.dc.comodoca.net 443
Trying 10.49.2.10...
Connected to scdpinpure.sec.dc.comodoca.net.
Escape character is '^]'.
Hi @geosword:
Would you mind sharing the helm install output with the --dry-run --debug flags?
For example:
helm install -f pure-fa-iscsi-values.yaml pure-storage-driver --namespace pso ./pure-csi --dry-run --debug
Sure... it's here: https://pastebin.com/wQyS7dVX
Here's the command I'm using to install:
helm3 install pure-storage pure/pure-csi --namespace storage -f pure-csi-values.yml
pure-csi-values.yml consists of:
---
FlashArrays:
  - APIToken: "<redacted>"
    MgmtEndPoint: "scdpinpure.sec.dc.comodoca.net"
app:
  debug: true
I had already tested communication to the array from the workers, and as the output shows, it works. I just tested from a Debian container within the same namespace:
root@debian:/# wget -S -O /dev/null https://scdpinpure.sec.dc.comodoca.net
--2020-04-14 20:09:56-- https://scdpinpure.sec.dc.comodoca.net/
Resolving scdpinpure.sec.dc.comodoca.net (scdpinpure.sec.dc.comodoca.net)... 10.49.2.10
Connecting to scdpinpure.sec.dc.comodoca.net (scdpinpure.sec.dc.comodoca.net)|10.49.2.10|:443... connected.
HTTP request sent, awaiting response...
HTTP/1.1 200 OK
Server: nginx/1.4.6 (Ubuntu)
Date: Tue, 14 Apr 2020 20:09:56 GMT
Content-Type: text/html;charset=utf-8
Content-Length: 1047
Connection: keep-alive
Set-Cookie: JSESSIONID=node01rw7kgiqlzb4ph2z2iohc8t6348.node0;Path=/;Secure;HttpOnly
Expires: Thu, 01 Jan 1970 00:00:00 GMT
X-Frame-Options: DENY
Content-Language: en-US
Strict-Transport-Security: max-age=31536000; includeSubDomains;
X-Content-Type-Options: nosniff
X-XSS-Protection: 1; mode=block
Length: 1047 (1.0K) [text/html]
Saving to: '/dev/null'
/dev/null 100%[==========================================================================================================================================>] 1.02K --.-KB/s in 0s
2020-04-14 20:09:56 (73.8 MB/s) - '/dev/null' saved [1047/1047]
root@debian:/#
This works too. Just trying to make sure there aren't connection problems somewhere.
I've just noticed this difference between the cluster and array that is working vs the one I'm having problems with:
Working:
Purity//FA 5.2.7
Not working:
Purity//FA 5.1.12
I also noticed that the tag of purestorage/k8s is 5.0.8 on the working cluster and 5.1.0 on the non-working cluster, so I tried using the values file to override the tag. The override took effect, but provisioning a PVC still reported the same error.
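For reference, the override looked roughly like this in pure-csi-values.yml (mirroring the image keys from the chart's values.yaml shown above):
image:
  name: purestorage/k8s
  tag: 5.0.8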
Hi, can you change your YAML like this:
---
arrays:
  FlashArrays:
    - APIToken: "<redacted>"
      MgmtEndPoint: "scdpinpure.sec.dc.comodoca.net"
app:
  debug: true
I think you are missing the arrays: key.
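Once the file is updated, re-applying the release with the same values file should pick it up, e.g.:
helm3 upgrade pure-storage pure/pure-csi --namespace storage -f pure-csi-values.yml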
!"^$£"^%£$!"^%£$!"^%£$^%$^&(^&^!$!!!!!
:D
Thank you, and sorry to waste your time!
No worries :) we are more than happy to help! We'll add more checks in the future to give users more hints in cases like this.