minio/directpv

[QUESTION] How to grow/extend directpvdrives after the filesystem and disk got resized?

gproxyz opened this issue · 8 comments

Hi,
I manually resized the disk and extended the filesystem.
The disk is now 101G:

sdb                     8:16    0  101G  0 disk /var/lib/directpv/mnt/b4a57081-a545-4262-a3b1-d1c0d5eb8976
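
For context, the resize amounted to roughly the following (a sketch, not the exact commands; the rescan step applies to SCSI-attached disks and is only needed if the kernel has not yet seen the new size):

# let the kernel pick up the new block-device size
echo 1 | sudo tee /sys/class/block/sdb/device/rescan
# grow the XFS filesystem on the DirectPV mount
sudo xfs_growfs /var/lib/directpv/mnt/b4a57081-a545-4262-a3b1-d1c0d5eb8976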

The new size is reported by:
kubectl get directpvnodes.directpv.min.io -o yaml

    - deniedReason: Used by DirectPV
      fsType: xfs
      fsuuid: f74ec73f-e82e-4b44-9b3b-334c9cb09d7f
      id: 8:16$mNCJgC94nvqBwicvjPBQYLR4jSWdELp5dpcMqjXYNJA=
      majorMinor: "8:16"
      make: QEMU QEMU_HARDDISK
      name: sdb
      size: 108447924224

But the directpvdrives.directpv.min.io CRD still showed the old size:
kubectl get directpvdrives.directpv.min.io f74ec73f-e82e-4b44-9b3b-334c9cb09d7f -o yaml

apiVersion: directpv.min.io/v1beta1
kind: DirectPVDrive
metadata:
  finalizers:
  - directpv.min.io/data-protection
  - directpv.min.io.volume/pvc-496f669c-f426-43db-ac37-5d5e8cab7c77
  generation: 9
  labels:
    directpv.min.io/access-tier: Default
    directpv.min.io/created-by: directpv-driver
    directpv.min.io/drive-name: sdb
    directpv.min.io/version: v1beta1
  name: f74ec73f-e82e-4b44-9b3b-334c9cb09d7f
  resourceVersion: "481374914"
  uid: 62e9e257-1e2a-41a2-9737-d4b6b8fb6f2e
spec: {}
status:
  allocatedCapacity: 80583180288
  freeCapacity: 26791002112
  fsuuid: f74ec73f-e82e-4b44-9b3b-334c9cb09d7f
  make: QEMU QEMU_HARDDISK
  status: Ready
  topology:
    directpv.min.io/identity: directpv-min-io
    directpv.min.io/rack: default
    directpv.min.io/region: default
    directpv.min.io/zone: default
  totalCapacity: 107374182400

How do I resize the directpvdrive?

Just restart the node-server pod on the node.
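
For example, something along these lines (assuming a default installation in the directpv namespace):

kubectl -n directpv get pods -o wide | grep node-server
kubectl -n directpv delete pod node-server-<id>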

@balamurugana

Just restart the node-server pod on the node.

This is the first thing I tried, but it doesn't help:
kubectl rollout restart daemonset node-server
Afterwards, kubectl directpv list drives still shows the old size, 100 GiB:

┌─────────┬──────┬────────────────────┬─────────┬────────┬─────────┬────────┐
│ NODE    │ NAME │ MAKE               │ SIZE    │ FREE   │ VOLUMES │ STATUS │
├─────────┼──────┼────────────────────┼─────────┼────────┼─────────┼────────┤
│ server1 │ sdb  │ QEMU QEMU_HARDDISK │ 100 GiB │ 25 GiB │ 1       │ Ready  │
│ server1 │ sdc  │ QEMU QEMU_HARDDISK │ 100 GiB │ 25 GiB │ 1       │ Ready  │
│ server1 │ sdf  │ QEMU QEMU_HARDDISK │ 100 GiB │ 25 GiB │ 1       │ Ready  │
│ server1 │ sdg  │ QEMU QEMU_HARDDISK │ 100 GiB │ 25 GiB │ 1       │ Ready  │

directpv version v4.0.4
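
For completeness, the raw capacity stored in the drive CRD also still shows the old value, which can be checked directly with:

kubectl get directpvdrives.directpv.min.io f74ec73f-e82e-4b44-9b3b-334c9cb09d7f -o jsonpath='{.status.totalCapacity}'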

@gproxyz If you did xfs_growfs and restarted the node-server pod but still see that the CRD is not updated, just manually edit it and move forward. Since DirectPV is meant for DAS and real drives, this is fine at the moment. I will test it locally before confirming this issue.
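
A rough sketch of that manual edit (whether the patch form works depends on the kubectl version and on whether the CRD exposes a status subresource; fall back to kubectl edit if it is rejected):

kubectl edit directpvdrives.directpv.min.io f74ec73f-e82e-4b44-9b3b-334c9cb09d7f
# or, with a recent kubectl, patch the status directly
# (new freeCapacity = new totalCapacity - allocatedCapacity = 108447924224 - 80583180288)
kubectl patch directpvdrives.directpv.min.io f74ec73f-e82e-4b44-9b3b-334c9cb09d7f \
  --subresource=status --type=merge \
  -p '{"status":{"totalCapacity":108447924224,"freeCapacity":27864743936}}'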

We currently do not support XFS resizing of initialized drives, @balamurugana.

This is because we probe the size from the XFS superblock, which here still reflects the older size.
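
For example, to see what the on-disk superblock currently reports (before and after xfs_growfs):

# via the mounted filesystem
xfs_info /var/lib/directpv/mnt/<FSUUID>
# or read the data-block count and block size straight from the superblock
sudo xfs_db -r -c 'sb 0' -c 'print dblocks blocksize' /dev/sdb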

@Praveenrajmani Actually, I tested this by adding a 512 MiB disk first, resizing it at the block level to 1.5 GiB, running xfs_growfs /var/lib/directpv/mnt/<FSUUID>, and finally restarting the daemonset. It works fine.
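
The sequence was roughly the following (a sketch rather than the exact commands used; the hypervisor-side resize and the device names will differ per setup):

# on the hypervisor: grow the virtual disk of the running VM
virsh blockresize <domain> <disk-target> 1536MiB
# in the guest: let the kernel see the new size, then grow XFS
echo 1 | sudo tee /sys/class/block/<device>/device/rescan
sudo xfs_growfs /var/lib/directpv/mnt/<FSUUID>
# restart the DirectPV node-server so it re-probes the drive
kubectl -n directpv rollout restart daemonset node-server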

Sorry for the late reply.
@balamurugana
In my case, after rebooting the OS,
kubectl directpv list drives
shows the new size.
Perhaps this is because the OS is CentOS 7.
But now everything is working fine.
Thank you.

@balamurugana Hi

Can you please share the commands you executed? I'm trying to do the same thing again, but the disk size is not being updated to the new value.

Here is what I did:

  1. Added extra space to the /dev/sdc disk.
  2. Ran sudo xfs_growfs /var/lib/directpv/mnt/<FSUUID>.
  3. Restarted the node-server: kubectl -n directpv delete pod/node-server-<id>.
    • Checked the size with kubectl directpv list drives --output wide -- the old value is still there.
  4. Ran sudo systemctl daemon-reload.
    • Checked the size -- still the old value.
  5. Ran sudo systemctl restart snap.microk8s.daemon-kubelite.service.
    • Checked the size -- still the old value.
  6. Ran kubectl rollout restart -n directpv daemonset node-server.
    • Checked the size -- still the old value.

If I reboot the node after step 3, it shows the correct value rather than the old one, but I would like to know whether it is possible to get the new size picked up without restarting the whole node.
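
One thing I have not tried yet: checking whether the guest kernel actually picked up the new size of /dev/sdc before I ran xfs_growfs; if it did not, the grow would have been a no-op until the reboot. A rough sketch of what I plan to try (the rescan path assumes a SCSI-attached disk):

# what size does the kernel currently see?
lsblk /dev/sdc
# ask the kernel to re-read the device size, then grow and re-probe
echo 1 | sudo tee /sys/class/block/sdc/device/rescan
sudo xfs_growfs /var/lib/directpv/mnt/<FSUUID>
kubectl -n directpv rollout restart daemonset node-server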

P.S.
I apologize if this request seems silly to you; I'm just learning all this.