unable to activate logical volumes: Volume group "csi-lvm" not found
m0sh1x2 opened this issue · 2 comments
m0sh1x2 commented
Hello,
I am testing a local setup with minikube and KubeVirt, and I want to use csi-driver-lvm on a freshly partitioned disk. Once everything is set up, the plugin eventually fails:
$ k logs csi-driver-lvm-plugin-f8rvp csi-driver-lvm-plugin
2021/10/22 12:28:45 unable to configure logging to stdout:no such flag -logtostderr
I1022 12:28:45.692895 1 lvm.go:115] pullpolicy: IfNotPresent
I1022 12:28:45.692903 1 lvm.go:119] Driver: lvm.csi.metal-stack.io
I1022 12:28:45.692907 1 lvm.go:120] Version: dev
I1022 12:28:45.780598 1 lvm.go:418] unable to list existing volumegroups:exit status 5
I1022 12:28:45.780621 1 nodeserver.go:51] volumegroup: csi-lvm not found
I1022 12:28:45.988701 1 nodeserver.go:58] unable to activate logical volumes: Volume group "csi-lvm" not found
Cannot process volume group csi-lvm
exit status 5
I1022 12:28:45.989663 1 controllerserver.go:272] Enabling controller service capability: CREATE_DELETE_VOLUME
I1022 12:28:45.989911 1 server.go:95] Listening for connections on address: &net.UnixAddr{Name:"//csi/csi.sock", Net:"unix"}
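The driver creates the csi-lvm volume group itself from devices matching devicePattern, so the error above means it found no usable device. A quick way to inspect the node (a sketch, assuming the LVM tools are available inside the minikube VM; these commands are not from the original report):

minikube ssh
sudo lsblk   # confirm /dev/sdb1 exists and is not already in use
sudo pvs     # LVM physical volumes the driver has claimed, if any
sudo vgs     # "csi-lvm" should appear here once the driver has created it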
Here are the steps that I take:
- Partition the disk: fdisk /dev/sdb -> d (delete old partitions) -> n -> p -> 1 (new primary partition) -> accept the sector defaults -> w to write and save
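A scripted equivalent of that interactive fdisk session, as a sketch (sfdisk is my substitution here, not what the issue used):

echo 'type=83' | sudo sfdisk /dev/sdb   # one Linux partition spanning the whole disk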
- Set up the helm repo:
helm repo add metal-stack https://helm.metal-stack.io
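A sanity check after adding the repo (a sketch, not part of the original steps):

helm repo update
helm search repo metal-stack/csi-driver-lvm   # the chart should show up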
- Install everything:
helm install mytest metal-stack/csi-driver-lvm --set lvm.devicePattern='/dev/sdb1'
- get the pods
k get po
NAME READY STATUS RESTARTS AGE
csi-driver-lvm-controller-0 3/3 Running 0 7m33s
csi-driver-lvm-plugin-v4jfd 3/3 Running 0 9m14s
- Create the example PVCs and pods, then check pv, pvc, and the storage classes:
$ kubectl apply -f examples/csi-pvc-raw.yaml
kubectl apply -f examples/csi-pod-raw.yaml
kubectl apply -f examples/csi-pvc.yaml
kubectl apply -f examples/csi-app.yaml
persistentvolumeclaim/pvc-raw unchanged
pod/pod-raw configured
persistentvolumeclaim/csi-pvc unchanged
pod/my-csi-app configured
$ k get pvs
error: the server doesn't have a resource type "pvs"
$ k get pv
No resources found
$ k get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
csi-pvc Pending csi-lvm-sc-linear 11m
pvc-raw Pending csi-lvm-sc-linear 11m
$ k get storageclasses.storage.k8s.io
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
csi-driver-lvm-linear lvm.csi.metal-stack.io Delete WaitForFirstConsumer true 17m
csi-driver-lvm-mirror lvm.csi.metal-stack.io Delete WaitForFirstConsumer true 17m
csi-driver-lvm-striped lvm.csi.metal-stack.io Delete WaitForFirstConsumer true 17m
standard (default) k8s.io/minikube-hostpath Delete Immediate false 53m
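Since all three storage classes use WaitForFirstConsumer, Pending is expected until a consuming pod is actually scheduled, so the events carry more information than the status. A sketch using the example resource names above (not output from the original report):

kubectl describe pvc csi-pvc      # events show why provisioning has not happened
kubectl describe pod my-csi-app   # scheduling events reference the unbound volume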
Any help would be greatly appreciated, thanks.
majst01 commented
You have to ensure that the devicePattern in the values.yaml matches your disks/partitions.
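To see what a running release is actually configured with (a sketch using the release name from this thread):

helm get values mytest -a | grep -i devicePattern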
m0sh1x2 commented
Thanks!
It works like a charm!
Here are the steps that I used in case someone gets stuck in the future:
helm get values mytest -a > values.yaml
# Set devicePattern to /dev/sdb1 or the name of your disks/partitions
helm upgrade mytest helm/csi-driver-lvm -f values.yaml
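(helm/csi-driver-lvm is the chart path inside a checkout of the repository; metal-stack/csi-driver-lvm from the repo added earlier should work as well.) To verify the fix afterwards, a sketch assuming the names from this thread:

kubectl logs daemonset/csi-driver-lvm-plugin -c csi-driver-lvm-plugin | tail   # the volume group errors should be gone
minikube ssh "sudo vgs"   # the csi-lvm volume group should now exist
kubectl get pvc           # PVCs bind once a consumer pod is scheduled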