kubernetes-csi/csi-driver-host-path

k8s 1.17: csi-hostpath-snapclass is not created during the driver deployment

sedovalx opened this issue · 13 comments

I'm trying to deploy the driver on minikube with --kubernetes-version v1.17.9, so I installed the snapshot CRDs and then ran csi-driver-host-path/deploy/kubernetes-1.17/deploy.sh. The last line in the deploy log was "deploying snapshotclass based on snapshotter version". When I then tried to create a snapshot with examples/csi-snapshot-v1beta1.yaml, I got the following error for the new-snapshot-demo volume snapshot:

$ kubectl describe volumesnapshot                                                                
Name:         new-snapshot-demo
Namespace:    default
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"snapshot.storage.k8s.io/v1beta1","kind":"VolumeSnapshot","metadata":{"annotations":{},"name":"new-snapshot-demo","namespace...
API Version:  snapshot.storage.k8s.io/v1beta1
Kind:         VolumeSnapshot
Metadata:
  Creation Timestamp:  2020-09-14T10:21:34Z
  Generation:          1
  Resource Version:    59736
  Self Link:           /apis/snapshot.storage.k8s.io/v1beta1/namespaces/default/volumesnapshots/new-snapshot-demo
  UID:                 1acdcca0-2cbd-4825-b15d-1eb6edd793b8
Spec:
  Source:
    Persistent Volume Claim Name:  csi-pvc
  Volume Snapshot Class Name:      csi-hostpath-snapclass
Status:
  Error:
    Message:     Failed to get snapshot class with error failed to retrieve snapshot class csi-hostpath-snapclass from the informer: "volumesnapshotclass.snapshot.storage.k8s.io \"csi-hostpath-snapclass\" not found"
    Time:        2020-09-14T10:21:34Z
  Ready To Use:  false
Events:
  Type     Reason                  Age    From                 Message
  ----     ------                  ----   ----                 -------
  Warning  GetSnapshotClassFailed  4m23s  snapshot-controller  Failed to get snapshot class with error failed to retrieve snapshot class csi-hostpath-snapclass from the informer: "volumesnapshotclass.snapshot.storage.k8s.io \"csi-hostpath-snapclass\" not found"

In deploy-hostpath.sh I see that csi-hostpath-snapshotclass.yaml is applied for k8s 1.16 only. Is that right? Am I doing something wrong?

The problem happened because I was calling the deploy.sh script from inside the kubernetes-1.17 directory. It has to be called from outside of it.

pohly commented

/reopen

It should be possible to invoke the script from anywhere. That's what

BASE_DIR=$(dirname "$0")

is for.

I'm not sure why it is failing, but the issue seems legitimate.

/help

@pohly: Reopened this issue.

In response to this:

/reopen

It should be possible to invoke the script from anywhere. That's what

BASE_DIR=$(dirname "$0")

is for.

I'm not sure why it is failing, but the issue seems legitimate.

/help

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

BASE_DIR is . if I do

cd deploy/kubernetes-1.17
./deploy.sh

but it contains kubernetes-1.17 if I run

./deploy/kubernetes-1.17/deploy.sh

(macOS, zsh)
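To make the two cases concrete, here is a small standalone demonstration (not from the repo; the /tmp/demo path and the echo line are my own illustration) of how dirname "$0" only reflects the path used to invoke the script, not the script's actual location:

```shell
# Illustration: dirname "$0" depends on how the script is invoked.
mkdir -p /tmp/demo/deploy/kubernetes-1.17
cat > /tmp/demo/deploy/kubernetes-1.17/deploy.sh <<'EOF'
#!/bin/sh
echo "BASE_DIR=$(dirname "$0")"
EOF
chmod +x /tmp/demo/deploy/kubernetes-1.17/deploy.sh

# Invoked from outside: BASE_DIR contains the kubernetes-1.17 component.
(cd /tmp/demo && ./deploy/kubernetes-1.17/deploy.sh)                 # BASE_DIR=./deploy/kubernetes-1.17
# Invoked from inside the directory: BASE_DIR is just ".".
(cd /tmp/demo/deploy/kubernetes-1.17 && ./deploy.sh)                 # BASE_DIR=.
```

Any logic that does `basename "${BASE_DIR}"` therefore gets "kubernetes-1.17" in the first case but "." in the second.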

pohly commented

Bingo:

driver_version="$(basename "${BASE_DIR}")"

That only works when BASE_DIR includes the "kubernetes-x.yy" name. The check shouldn't make that assumption and should instead use

driver_version="$(basename "$(readlink -f "${BASE_DIR}")")"

Does readlink -f work under macOS? It's from GNU coreutils.

No, it seems it doesn't work on macOS. Here is an explanation and a workaround: https://stackoverflow.com/questions/1055671/how-can-i-get-the-behavior-of-gnus-readlink-f-on-a-mac

pohly commented

@sedovalx a simple cd + pwd should do the trick here. Do you think you can come up with a PR?
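A sketch of what the cd + pwd suggestion could look like (this is my illustration, not the merged fix; the /tmp/csi-demo path is made up). Resolving BASE_DIR to an absolute path before calling basename makes the result independent of the invocation path, and unlike readlink -f it works in plain POSIX sh on both Linux and macOS:

```shell
# Sketch of the suggested fix: canonicalize BASE_DIR with cd + pwd so
# basename always sees the real directory name.
mkdir -p /tmp/csi-demo/deploy/kubernetes-1.17
cat > /tmp/csi-demo/deploy/kubernetes-1.17/deploy.sh <<'EOF'
#!/bin/sh
BASE_DIR=$(dirname "$0")
# cd into BASE_DIR in a subshell and print the absolute path; portable
# alternative to GNU "readlink -f".
driver_version="$(basename "$(cd "${BASE_DIR}" && pwd)")"
echo "driver_version=${driver_version}"
EOF
chmod +x /tmp/csi-demo/deploy/kubernetes-1.17/deploy.sh

# Both invocation styles now agree:
(cd /tmp/csi-demo && ./deploy/kubernetes-1.17/deploy.sh)             # driver_version=kubernetes-1.17
(cd /tmp/csi-demo/deploy/kubernetes-1.17 && ./deploy.sh)             # driver_version=kubernetes-1.17
```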

Sorry - I don't have much bash knowledge. I can do it if you tell me exactly what to change :)

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-contributor-experience at kubernetes/community.
/close

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-contributor-experience at kubernetes/community.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.