Error mounting imported snapshot
psincraian opened this issue · 5 comments
What did you do?
I have MongoDB installed on a Droplet, and I want to migrate it to k8s. The MongoDB Droplet has a volume attached where all the data lives. I created a snapshot of the volume and followed the import-snapshot guide to bring the data into the mongo pod. A pod with a new volume that does not use the snapshot works as expected, but when I try to import the snapshot into a new PV I get an error.
What did you expect to happen?
I expected importing the snapshot and mounting it into the PV to work, but instead I get the following error:
```
Events:
  Type     Reason                  Age                From                     Message
  ----     ------                  ---                ----                     -------
  Normal   Scheduled               34s                default-scheduler        Successfully assigned default/task-pv-pod to pepy-cluster-mmlco
  Normal   SuccessfulAttachVolume  32s                attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-84afc954-100d-446a-b102-5cccd9a6faa3"
  Warning  FailedMount             12s (x6 over 28s)  kubelet                  MountVolume.MountDevice failed for volume "pvc-84afc954-100d-446a-b102-5cccd9a6faa3" : rpc error: code = Internal desc = mounting failed: exit status 255 cmd: 'mount -t ext4 /dev/disk/by-id/scsi-0DO_Volume_pvc-84afc954-100d-446a-b102-5cccd9a6faa3 /var/lib/kubelet/plugins/kubernetes.io/csi/dobs.csi.digitalocean.com/fe73301729091014250d5739d47348212dfea5e54285731b4f37b36f8138ae32/globalmount' output: "mount: mounting /dev/disk/by-id/scsi-0DO_Volume_pvc-84afc954-100d-446a-b102-5cccd9a6faa3 on /var/lib/kubelet/plugins/kubernetes.io/csi/dobs.csi.digitalocean.com/fe73301729091014250d5739d47348212dfea5e54285731b4f37b36f8138ae32/globalmount failed: Invalid argument\n"
```
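The `Invalid argument` failure above comes from `mount -t ext4` being run against a volume whose filesystem may not be ext4. A quick way to verify the actual filesystem type is `blkid`; the sketch below demonstrates it on a small file-backed image (the device path in the comment is only an example, not from this issue):

```shell
# On the original Droplet you could check the attached volume's filesystem
# type with something like:
#   blkid -o value -s TYPE /dev/sda1    # e.g. prints "ext4" or "xfs"

# Demonstration on a file-backed image instead of a real block device:
truncate -s 16M demo.img            # create a sparse 16 MiB image file
mkfs.ext4 -q -F demo.img            # format as ext4 (-F: target is not a block device)
blkid -o value -s TYPE demo.img     # prints the detected type: ext4
```

If the reported type differs from what the CSI driver tries to mount (ext4 here), the mount fails exactly as shown in the events.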
Configuration (MUST fill this out):
Files
nginx.yml

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: task-pv-pod
spec:
  volumes:
    - name: storage
      persistentVolumeClaim:
        claimName: pepy-mongo-pvc
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/"
          name: storage
```
pvc.yml

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pepy-mongo-pvc
spec:
  dataSource:
    name: snapshot-manual
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 30Gi
```
snapshot.yml

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: snapshot-manual
spec:
  volumeSnapshotClassName: do-block-storage
  source:
    volumeSnapshotContentName: snapshotcontent-manual
```
snapshotcontent.yml

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotContent
metadata:
  name: snapshotcontent-manual
spec:
  deletionPolicy: Retain
  driver: dobs.csi.digitalocean.com
  source:
    snapshotHandle: 3a1f36bb-6841-11ed-ba59-0a58ac1456fb
  volumeSnapshotRef:
    name: snapshot-manual
    namespace: default
```
- Kubernetes Version: 1.24.4-do.0
- Cloud provider/framework version, if applicable (such as Rancher): N/A
The failure seems to be with mounting. Is the original volume an ext4 file system as well?
Hey @timoreimann
I think you are right, the file system is not ext4 🤦‍♂️
Thanks for your help!
`mount` error messages can be pretty hard to make sense of. Glad we figured it out.
Hey @timoreimann, sorry to bother you, but how can I specify the file system type for the snapshot? I tried to find it in the docs but couldn't see it.
Hey @psincraian 👋 you need to specify a StorageClass that has the desired file system configured. FWIW, DOKS pre-provides a number of popular StorageClass configurations next to the standard one. If none fits, you can create a new one and reference it in your object(s).
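For anyone finding this later, a minimal sketch of what such a StorageClass could look like, assuming the source volume is XFS. The class name `do-block-storage-xfs` is illustrative; `csi.storage.k8s.io/fstype` is the standard CSI external-provisioner parameter for selecting a filesystem, but check the driver's docs for the exact key it supports:

```yaml
# Hypothetical StorageClass requesting xfs instead of the default ext4.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: do-block-storage-xfs
provisioner: dobs.csi.digitalocean.com
parameters:
  csi.storage.k8s.io/fstype: xfs
```

The PVC would then reference it via `storageClassName: do-block-storage-xfs` in its `spec`, so the restored volume is mounted with a filesystem type matching the snapshot's contents.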