openebs/openebsctl

Support creating a cStor Pool Cluster (CSPC) template

kmova opened this issue · 3 comments

kmova commented

Setting up cStor storage is a cumbersome process that requires users to fetch the appropriate BDs and then create a CSPC YAML file to apply. This can lead to many misconfigurations.

It would be nice to have a CLI command that automatically creates a CSPC YAML file, which the user can review and then apply to the cluster.

One way to implement this could be:

kubectl openebs template cspc --nodes=node1,node2,node3 [--number-of-devices=2 --raidtype=mirror]

  • provide a generic template subcommand that can be extended for other use cases (a rough wiring sketch follows this list)
  • cspc can be the first template
  • nodes is a mandatory argument - can be one or more node names (obtained via kubectl get nodes)
  • optional: ask for the number of block devices to be used (defaults to 1)
  • optional: ask for the raidtype (defaults to striped)
  • other optional parameters that can be taken up are the type of block device and the minimum capacity of the block device
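A minimal sketch of how this could be wired up, assuming the plugin's cobra-based command tree; the flag names mirror the proposal above, while the package layout and runCSPCTemplate are placeholders, not the actual implementation:

package template

import (
	"fmt"
	"io"
	"strings"

	"github.com/spf13/cobra"
)

// Options collects the flags proposed above for `kubectl openebs template cspc`.
type Options struct {
	Nodes           []string // mandatory: node names from `kubectl get nodes`
	NumberOfDevices int      // optional: block devices per node, defaults to 1
	RaidType        string   // optional: defaults to stripe
}

// NewCmdTemplateCSPC wires the proposed flags into a `cspc` subcommand
// that would hang off a generic `template` command.
func NewCmdTemplateCSPC() *cobra.Command {
	opts := Options{}
	var nodes string

	cmd := &cobra.Command{
		Use:   "cspc",
		Short: "Generate a CStorPoolCluster YAML for review",
		RunE: func(cmd *cobra.Command, args []string) error {
			opts.Nodes = strings.Split(nodes, ",")
			return runCSPCTemplate(cmd.OutOrStdout(), opts)
		},
	}
	cmd.Flags().StringVar(&nodes, "nodes", "", "comma-separated node names (required)")
	cmd.Flags().IntVar(&opts.NumberOfDevices, "number-of-devices", 1, "block devices to use per node")
	cmd.Flags().StringVar(&opts.RaidType, "raidtype", "stripe", "raid type, e.g. stripe or mirror")
	_ = cmd.MarkFlagRequired("nodes")
	return cmd
}

// runCSPCTemplate is a stub here; the real implementation would fetch eligible
// BlockDevices for the requested nodes and print the CSPC YAML to w.
func runCSPCTemplate(w io.Writer, opts Options) error {
	_, err := fmt.Fprintf(w, "# would render a %s CSPC for nodes %v\n", opts.RaidType, opts.Nodes)
	return err
}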

Thanks for the info, it sounds really cool & time-saving.

The BlockDevices selected for the above should be in an Active+Unclaimed state, right? I need to confirm whether the latest NDM marks a BD as Claimed when some consumer application already uses it via a newer CSI driver of another cas-type.

Also, is the CLI expected to write the generated template to stdout, or to open the default editor just like kubectl edit ABCD EFGH does, where the file is written to a known temporary location in /tmp and applied on exit?
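For reference, the kubectl edit-style flow asked about here would look roughly like this; just a sketch with made-up function names (and, as discussed below, the first cut may simply write to stdout instead):

package template

import (
	"os"
	"os/exec"
)

// editInTempFile writes the generated YAML to a temp file (under os.TempDir(),
// i.e. /tmp on Linux), opens $EDITOR on it (falling back to vi), and returns
// the edited contents so the CLI could apply them on exit.
func editInTempFile(generatedYAML []byte) ([]byte, error) {
	f, err := os.CreateTemp("", "cspc-*.yaml")
	if err != nil {
		return nil, err
	}
	defer os.Remove(f.Name())

	if _, err := f.Write(generatedYAML); err != nil {
		return nil, err
	}
	f.Close()

	editor := os.Getenv("EDITOR")
	if editor == "" {
		editor = "vi"
	}
	cmd := exec.Command(editor, f.Name())
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		return nil, err
	}
	return os.ReadFile(f.Name())
}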

kmova commented

The BlockDevices selected for the above should be in an Active+Unclaimed state, right? I need to confirm whether the latest NDM marks a BD as Claimed when some consumer application already uses it via a newer CSI driver of another cas-type.

Need to check. But when I look at the output of kubectl openebs get bd, it looks promising already. Pick up devices that are active, unclaimed, not formatted, and not mounted.
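Roughly, the selection could look like this (a sketch only; the struct below is a trimmed-down stand-in for the NDM BlockDevice CR, not the actual API type):

package template

// blockDevice is a simplified stand-in for the NDM BlockDevice CR, carrying
// only the fields the selection logic below cares about.
type blockDevice struct {
	Name       string
	Node       string
	ClaimState string // e.g. "Unclaimed"
	State      string // e.g. "Active"
	FSType     string // empty when the device is not formatted
	MountPoint string // empty when the device is not mounted
}

// eligible applies the criteria above: Active, Unclaimed, not formatted,
// and not mounted.
func eligible(bd blockDevice) bool {
	return bd.State == "Active" &&
		bd.ClaimState == "Unclaimed" &&
		bd.FSType == "" &&
		bd.MountPoint == ""
}

// pickDevices returns up to count eligible devices on the given node.
func pickDevices(all []blockDevice, node string, count int) []blockDevice {
	var picked []blockDevice
	for _, bd := range all {
		if bd.Node == node && eligible(bd) {
			picked = append(picked, bd)
			if len(picked) == count {
				break
			}
		}
	}
	return picked
}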

generated template to stdout or open the default editor

We can push to stdout for now, as I am guessing there may be other things the user needs to change from their console in the initial stages of this command's development.

kmova commented

Let us say, the kubectl openebs get bd shows something like this:

NAME                                             PATH        SIZE      CLAIMSTATE   STATUS   FSTYPE   MOUNTPOINT
gke-kmova-helm-default-pool-c2c6291d-b68p
├─blockdevice-2e97d1c3e5c62f7c20b7195d2ce5711b   /dev/sdc1   375 GiB   Unclaimed    Active
├─blockdevice-6ddb44d6cf3b8a72951d9981195f041a   /dev/sdb    375 GiB   Unclaimed    Active   ext4     /mnt/disks/ssd0
└─blockdevice-b8dacb1b6e477598b0d1607bfa38f061   /dev/sdd    375 GiB   Unclaimed    Active   ext4     /mnt/disks/ssd2
gke-kmova-helm-default-pool-c2c6291d-zbfx
├─blockdevice-a7a0d2f21f51da623d3e8a42290127fa   /dev/sdb    375 GiB   Unclaimed    Active   ext4     /mnt/disks/ssd0
├─blockdevice-d9e282dfc8e66d1752a7323d7f86e7a3   /dev/sdd    375 GiB   Unclaimed    Active   ext4     /mnt/disks/ssd2
└─blockdevice-ea1b94df1e20c7b27c23c1bcd99ca06d   /dev/sdc1   375 GiB   Unclaimed    Active
gke-kmova-helm-default-pool-c2c6291d-0xbq
├─blockdevice-cd9c5bd4a446456edc4639c785e36c31   /dev/sdb    375 GiB   Unclaimed    Active   ext4     /mnt/disks/ssd0
├─blockdevice-fa91beab45b42f602c91dd4c0bbb01c2   /dev/sdc1   375 GiB   Unclaimed    Active
└─blockdevice-fc315f381989aed58b3092601dbc3bcf   /dev/sdd    375 GiB   Unclaimed    Active   ext4     /mnt/disks/ssd2

The output of the template command could be something like:

apiVersion: cstor.openebs.io/v1
kind: CStorPoolCluster
metadata:
  name: cstor-disk-pool
  namespace: openebs
spec:
  pools:
    - nodeSelector:
        kubernetes.io/hostname: "gke-kmova-helm-default-pool-c2c6291d-b68p"
      dataRaidGroups:
        - blockDevices:
            # /dev/sdc1   375 GiB
            - blockDeviceName: "blockdevice-2e97d1c3e5c62f7c20b7195d2ce5711b"
      poolConfig:
        dataRaidGroupType: "stripe"
    - nodeSelector:
        kubernetes.io/hostname: "gke-kmova-helm-default-pool-c2c6291d-zbfx"
      dataRaidGroups:
        - blockDevices:
            # /dev/sdc1   375 GiB
            - blockDeviceName: "blockdevice-ea1b94df1e20c7b27c23c1bcd99ca06d"
      poolConfig:
        dataRaidGroupType: "stripe"
    - nodeSelector:
        kubernetes.io/hostname: "gke-kmova-helm-default-pool-c2c6291d-0xbq"
      dataRaidGroups:
        - blockDevices:
            # /dev/sdc1   375 GiB
            - blockDeviceName: "blockdevice-fa91beab45b42f602c91dd4c0bbb01c2"
      poolConfig:
        dataRaidGroupType: "stripe"

If the number of devices or the nodes are not specified, we can potentially list all nodes and the suitable devices on each, with some information about them in comments as shown above. It is much easier to refine the template by deleting some entries than by having to copy/paste.
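A sketch of how the rendering could keep those comments, so pruning is just deleting lines (the type and function names here are illustrative, not the eventual implementation):

package template

import (
	"fmt"
	"strings"
)

// deviceEntry carries just what the rendered YAML needs per block device.
type deviceEntry struct {
	Name string // e.g. blockdevice-2e97d1c3e5c62f7c20b7195d2ce5711b
	Path string // e.g. /dev/sdc1
	Size string // e.g. 375 GiB
}

// renderCSPC emits the CSPC YAML shown above, listing each candidate device
// as an entry preceded by a comment (path and size) so the user can prune
// the list before applying it.
func renderCSPC(devicesByNode map[string][]deviceEntry, nodes []string, raidType string) string {
	var b strings.Builder
	b.WriteString("apiVersion: cstor.openebs.io/v1\n")
	b.WriteString("kind: CStorPoolCluster\n")
	b.WriteString("metadata:\n  name: cstor-disk-pool\n  namespace: openebs\n")
	b.WriteString("spec:\n  pools:\n")
	for _, node := range nodes {
		fmt.Fprintf(&b, "    - nodeSelector:\n        kubernetes.io/hostname: %q\n", node)
		b.WriteString("      dataRaidGroups:\n        - blockDevices:\n")
		for _, d := range devicesByNode[node] {
			fmt.Fprintf(&b, "            # %s   %s\n", d.Path, d.Size)
			fmt.Fprintf(&b, "            - blockDeviceName: %q\n", d.Name)
		}
		fmt.Fprintf(&b, "      poolConfig:\n        dataRaidGroupType: %q\n", raidType)
	}
	return b.String()
}

With --raidtype=mirror and --number-of-devices=2, the two devices picked for a node would presumably land in the same raid group; the sketch above keeps one raid group per node for brevity.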