cStor Volume Expansion


There are many cases where a cStor volume has to be expanded. For example, the capacity might be completely filled up, in which case the application pod will be in CrashLoopBackOff or Running state depending on the liveness probe in the application. Another scenario is expanding the capacity of the volume before putting more load on it, so that the application keeps running uninterrupted.
The following are the prerequisites and steps to be performed to expand the cStor volume.

Prerequisite

  1. All associated cStor pool pods should be in Running state. Verify by using kubectl get pod -n <openebs_installed_namespace>.
  2. Disable any snapshot schedule running on this volume.
  3. Ensure all CVRs which are associated to this cStor volume are Healthy. This can be checked using the following command (see also the note after this list).
    kubectl get cvr -n openebs
    
  4. Verify that the same number of user-created snapshots are present in all associated pools.
    This can be done in the following way.
    First, exec into the associated pool pod using the following command.
    kubectl exec -it <associated_pool_pod> -n <openebs_installed_namespace> -- bash
    
    Then run zfs list and use the corresponding dataset name in the following command.
    zfs list -t snapshot | grep <dataset_name> | grep -v ".io_snap" | grep -v "rebuild_clone" | grep -v "rebuild_snap"
    
    If no snapshots have been created by the user, the command will return no datasets available.
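
Note: If you want to limit the CVR listing in prerequisite 3 to just this volume, a label selector like the one below may help. This is only a suggestion and assumes the openebs.io/persistent-volume label, which cStor applies to CVRs; verify the labels on your CVRs first.
kubectl get cvr -n openebs -l openebs.io/persistent-volume=<pv_name>
Each listed CVR should report Healthy in its STATUS column before you proceed.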

Overview

  1. Update the size of the corresponding volume in all the associated pool pods.
  2. Update the LUN size in istgt.conf in the corresponding cStor target pod.
  3. Go to the node where the application is running and do the following:
    • Re-scan iscsi
    • Resize the file-system
  4. Verify the size from the application.
  5. If everything is successful, edit/patch the respective CStorVolume and PersistentVolume (PV) with the updated size.

Once the above prerequisites are met, follow the steps to expand the cStor Volume.

Step 1: Update the size of the corresponding volume in all the associated pool pods.
First, exec into the associated pool pod using the following command.

kubectl exec -it <associated_pool_pod> -n <openebs_installed_namespace> -- bash

Then run zfs list and use the corresponding dataset name in the following command to get the current size of the volume.

zfs get volsize <dataset_name>

Now update the volume size of the dataset using the following command.

zfs set volsize=<expanded_size> <dataset_name>

Verify the size is reflected properly by using the following command.

zfs get volsize <dataset_name>

Note: Repeat this step on all associated pool pods.
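
For illustration, assuming a hypothetical pool dataset named cstor-deaf87e6-879e-11e9-836c-42010a8000b1 and the PV name used in the sample istgt.conf in Step 2, expanding the zvol to 12G would look roughly like this:

zfs get volsize cstor-deaf87e6-879e-11e9-836c-42010a8000b1/pvc-803985e7-879e-11e9-836c-42010a8000b1
zfs set volsize=12G cstor-deaf87e6-879e-11e9-836c-42010a8000b1/pvc-803985e7-879e-11e9-836c-42010a8000b1
zfs get volsize cstor-deaf87e6-879e-11e9-836c-42010a8000b1/pvc-803985e7-879e-11e9-836c-42010a8000b1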

Step 2: Update the LUN size in istgt.conf using the following procedure.
First, exec into the cstor-istgt container which is running inside the cStor target pod. The corresponding cStor target pod can be found using kubectl get pod -n openebs | grep <pv_name>

kubectl exec -it <associated_cStor_Target_pod> -c cstor-istgt -n <openebs_installed_namespace> -- bash

Then change to the configuration directory: cd /usr/local/etc/istgt/
You need to edit the istgt.conf file with the vi editor; if it is not installed, install it with apt-get install vim -y. Then edit istgt.conf to update the LUN0 Storage field under the [LogicalUnit1] section. Verify the TargetName under the [LogicalUnit1] section, update the field below with the new size, and save the file.
LUN0 Storage <expanded_size> 32k
Sample output:

[LogicalUnit1]
  TargetName pvc-803985e7-879e-11e9-836c-42010a8000b1
  TargetAlias nicknamefor-pvc-803985e7-879e-11e9-836c-42010a8000b1
  Mapping PortalGroup1 InitiatorGroup1
  AuthMethod None
  AuthGroup None
  UseDigest Auto
  ReadOnly No
  ReplicationFactor 3
  ConsistencyFactor 2
  UnitType Disk
  UnitOnline Yes
  BlockLength 512
  QueueDepth 32
  Luworkers 6
  UnitInquiry "OpenEBS" "iscsi" "0" "80648584-879e-11e9-836c-42010a8000b1"
  PhysRecordLength 4096
  LUN0 Storage 12G 32k   # update the new size here and save it
  LUN0 Option Unmap Disable
  LUN0 Option WZero Disable
  LUN0 Option ATS Disable
  LUN0 Option XCOPY Disable

Then kill the currently running istgt process. This can be done as follows.
Find the istgt process using ps -auxwww | grep istgt. The following is a sample output.

root           1  0.0  0.0   4504   752 ?        Ss   11:52   0:00 /bin/sh -c entrypoint-istgtimage.sh
root           7  1.5  0.0 342164  4500 ?        Sl   11:52   0:19 /usr/local/bin/istgt
root         243  0.0  0.0  11284   972 ?        S+   12:12   0:00 grep istgt

Here the istgt process PID is 7, so kill it using kill <istgt_pid>, in this case kill 7. This will restart the corresponding cStor target pod, and you will be dropped out of the target pod session.
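
Before moving to the node, you can confirm that the target pod is back in Running state using the same listing command mentioned at the start of this step:

kubectl get pod -n <openebs_installed_namespace> | grep <pv_name>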

Step 3: Go to the node where the application is running and check the current size of the volume using lsblk. Then re-scan iSCSI using the following command:

sudo iscsiadm -m node -R
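
If you are not sure which device on the node belongs to this volume, printing the iSCSI sessions along with their attached disks can help, since the target IQN contains the PV name. This is optional and assumes the node logs in to the target with open-iscsi, which provides iscsiadm:

sudo iscsiadm -m session -P 3 | grep -E "Target|Attached scsi disk"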

Step 4: Verify that the new size is reflected using the lsblk command on the node where the application pod is running.

Step 5: Resize the filesystem in the same node using the following command.

sudo resize2fs /dev/<device>

For example:
If your OpenEBS volume shows up as /dev/sdc on the node, then use sudo resize2fs /dev/sdc.
Now you are almost done with the expansion.
Step 6: If the application is in CrashLoopBackOff state, try restarting the application pod. Then verify the new size inside the application pod.
You can exec into the application pod and run a df -h command to verify the size of the mount point.
Step 7: If everything is successful, edit/patch the respective CStorVolume and PersistentVolume (PV) with the updated size.
To get cstorvolume: kubectl get cstorvolume -n openebs
To get persistentvolume: kubectl get pv
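
For example, if the volume was expanded to 12G, the capacity could be patched roughly as follows. This is only a sketch: the spec.capacity field assumes the v1alpha1 CStorVolume schema, so verify the field against kubectl get cstorvolume <pv_name> -n openebs -o yaml before applying.

kubectl patch cstorvolume <pv_name> -n openebs --type merge -p '{"spec":{"capacity":"12G"}}'
kubectl patch pv <pv_name> --type merge -p '{"spec":{"capacity":{"storage":"12G"}}}'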