**Logical Volume Manager (LVM)** is a software-based tool for managing disk storage on Linux operating systems, including RHEL. LVM abstracts physical storage devices, such as hard disk drives or solid-state drives, into logical volumes that can be resized, moved, and backed up without disrupting the operation of the system.
In RHEL, LVM is implemented as a set of kernel modules and user-space tools that provide support for logical volumes. When LVM is installed, it creates a new layer of abstraction between physical storage devices and the file system. This layer is composed of three main components: physical volumes (PVs), volume groups (VGs), and logical volumes (LVs).
**Physical volumes (PVs)** are storage devices that are managed by LVM. They can be disks, partitions, or even entire storage arrays. When a physical volume is initialized by LVM, it is divided into a number of physical extents (PEs): small, fixed-size units of storage.
**Volume groups (VGs)** are collections of one or more physical volumes that are grouped together by LVM. A volume group provides a pool of storage space that can be allocated to logical volumes as needed.
**Logical volumes (LVs)** are virtual disks created within volume groups. A logical volume is composed of one or more physical extents, which can be allocated from one or more physical volumes. Logical volumes can be resized dynamically, so administrators can add or remove storage capacity without shutting down or restarting the system.
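As a brief sketch of how these layers are created on a plain RHEL host (the device name `/dev/sdb` and the volume and group names here are hypothetical), the standard LVM command-line tools build each layer in turn:

```shell
# Initialize a disk as a physical volume (hypothetical device /dev/sdb)
pvcreate /dev/sdb

# Group one or more PVs into a volume group named "vg_data"
vgcreate vg_data /dev/sdb

# Carve a 10 GiB logical volume "lv_app" out of the volume group
lvcreate -n lv_app -L 10G vg_data

# Later, grow the LV by 5 GiB and resize its filesystem in one step
lvextend -L +5G --resizefs /dev/vg_data/lv_app
```

The last command illustrates the dynamic resizing described above: the logical volume grows while the system stays online.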
LVM in RHEL provides several advantages over traditional partition-based storage. First, it allocates storage space more efficiently by allowing logical volumes to span multiple physical volumes. Second, it offers greater flexibility in managing storage resources by allowing logical volumes to be resized on the fly without disrupting the system. Finally, it improves reliability through built-in support for snapshots and backups of logical volumes.
LVM is a powerful tool for managing disk storage in RHEL, providing flexibility, scalability, and reliability to meet the storage needs of modern enterprise applications.
The diagram below shows how disks, LVM objects, and OpenShift storage resources relate to one another under the LVM Storage operator:

```mermaid
graph LR
LVMStorageOperator((LVMStorageOperator))-->|Manages| LVMCluster
LVMStorageOperator-->|Manages| StorageClass
StorageClass-->|Creates| PersistentVolumeA
StorageClass-->|Creates| PersistentVolumeB
PersistentVolumeA-->LV1
PersistentVolumeB-->LV2
LVMCluster-->|Comprised of|Disk1((Disk1))
LVMCluster-->|Comprised of|Disk2((Disk2))
LVMCluster-->|Comprised of|Disk3((Disk3))
subgraph Logical Volume Manager
Disk1-->|Abstracted|PV1
Disk2-->|Abstracted|PV2
Disk3-->|Abstracted|PV3
PV1-->VG
PV2-->VG
PV3-->VG
LV1-->VG
LV2-->VG
end
```
In OpenShift, the **LVM Storage operator (LVMS)** provides a way to manage and automate the creation, deletion, resizing, and backup of logical volumes in an OpenShift cluster. The LVMS operator is based on the TopoLVM project, which provides a CSI plugin that LVMS uses to provision and manage LVM storage in a cluster. It aims to provide a simple and reliable way to manage storage without requiring extensive knowledge of storage systems or configurations. An architecture diagram of TopoLVM is available in the TopoLVM project documentation.
Some of the key features of the LVMS operator include:

- **Automation**: The LVMS operator automates the process of creating and managing logical volumes, reducing the need for manual intervention and making it easier to scale storage resources in an OpenShift cluster.
- **Dynamic resizing**: The LVMS operator enables dynamic resizing of logical volumes, so administrators can easily add or remove storage capacity as needed without having to shut down or restart applications.
- **Backup and recovery**: The LVMS operator includes built-in backup and recovery capabilities, so administrators can easily create and restore backups of logical volumes in the event of data loss or corruption.
Overall, the LVMS operator is a powerful tool for managing storage resources in an OpenShift cluster, providing automation, scalability, and flexibility to meet the needs of modern containerized applications.
- Currently, it is not possible to upgrade from ODF Logical Volume Manager Operator 4.11 to LVM Storage 4.12 on single-node OpenShift clusters. See: 1
- The LVMS operator is only supported on single-node OpenShift clusters deployed by Red Hat Advanced Cluster Management (RHACM).
- You can only create a single instance of the `LVMCluster` custom resource (CR) on an OpenShift Container Platform cluster.
- You can make only a single `deviceClass` entry in the `LVMCluster` CR.
- When a device becomes part of the `LVMCluster` CR, it cannot be removed.
- LVM Storage creates a volume group using all the available unused disks and creates a single thin pool with a size of 90% of the volume group. The remaining 10% of the volume group is left free to enable data recovery by expanding the thin pool when required.
- LVM Storage configures a default overprovisioning limit of 10 to take advantage of the thin-provisioning feature. The total size of the volumes and volume snapshots that can be created on single-node OpenShift clusters is 10 times the size of the thin pool.
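To make those defaults concrete, here is a small worked example of the sizing arithmetic for a hypothetical 100 GiB volume group:

```shell
# Hypothetical 100 GiB volume group managed by LVM Storage defaults
VG_SIZE_GIB=100

# The thin pool takes 90% of the volume group; 10% stays free for recovery
THIN_POOL_GIB=$(( VG_SIZE_GIB * 90 / 100 ))

# Overprovisioning limit of 10: up to 10x the thin pool size can be provisioned
MAX_PROVISION_GIB=$(( THIN_POOL_GIB * 10 ))

echo "thin pool: ${THIN_POOL_GIB} GiB"              # prints "thin pool: 90 GiB"
echo "max provisionable: ${MAX_PROVISION_GIB} GiB"  # prints "max provisionable: 900 GiB"
```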
You can install the LVMS Operator using either the Web Console or RHACM.
The `LVMCluster` custom resource is used to create a Logical Volume Manager (LVM) cluster after LVM Storage is installed on OpenShift Container Platform.
Here are the steps to create an LVMCluster custom resource:
- Ensure that the Project selected is `openshift-storage`.
- In the OpenShift Container Platform Web Console, click **Operators** → **Installed Operators** to view all the installed Operators.
- Click **LVM Storage**, and then click **Create LVMCluster** under **LVMCluster**.
- On the **Create LVMCluster** page, select either **Form view** or **YAML view**.
- In the **YAML view**, specify the LVMCluster custom resource definition with the required fields such as `name`, `deviceClasses`, `thinPoolConfig`, and `nodeSelector`.
The LVMCluster custom resource has the following fields:

- `name`: The name of the LVMCluster custom resource.
- `deviceClasses`: A list of device classes that define the storage devices to use in the LVM cluster.
- `thinPoolConfig`: The configuration for the thin pool that will be created in the LVM cluster.
- `nodeSelector`: A node selector that matches the worker nodes to use in the LVM cluster.
The LVMCluster custom resource also has some optional fields:

- `tolerations`: A list of node tolerations to apply to the LVMCluster custom resource.
- `deviceSelector`: A device selector that selects the storage devices to use in the LVM cluster. If this field is not included when the LVMCluster is created, it is not possible to add a `deviceSelector` section to the CR later. In that case, the LVMCluster must be removed and a new CR created.
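Putting these fields together, a minimal `LVMCluster` manifest might look like the sketch below. The `deviceClasses` and `thinPoolConfig` values mirror the example shown later in this document; the `nodeSelector` stanza and node name are illustrative assumptions, not required values:

```yaml
apiVersion: lvm.topolvm.io/v1alpha1
kind: LVMCluster
metadata:
  name: my-lvmcluster
  namespace: openshift-storage
spec:
  storage:
    deviceClasses:
      - name: vg1                      # illustrative device class name
        thinPoolConfig:
          name: thin-pool-1
          sizePercent: 90              # thin pool uses 90% of the volume group
          overprovisionRatio: 10       # default overprovisioning limit
        nodeSelector:                  # optional; illustrative assumption
          nodeSelectorTerms:
            - matchExpressions:
                - key: kubernetes.io/hostname
                  operator: In
                  values:
                    - my-sno-node      # hypothetical node name
```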
You can provision persistent volume claims (PVCs) using the `StorageClass` that is created during the Operator installation. You can provision block and file PVCs; however, the storage is allocated only when a pod that uses the PVC is created.
Procedure:
- Identify the `StorageClass` that is created when LVM Storage is deployed. The StorageClass name is in the format `lvms-<device-class-name>`, where `<device-class-name>` is the name of the device class that you provided in the `LVMCluster` of the `Policy` YAML. For example, if the deviceClass is called `vg1`, then the StorageClass name is `lvms-vg1`. The `volumeBindingMode` of the storage class is set to `WaitForFirstConsumer`.
- To create a PVC where the application requires storage, save the following YAML to a file with a name such as `pvc.yaml`. Example:
```yaml
# block pvc
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: lvm-block-1
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block
  resources:
    requests:
      storage: 10Gi
  storageClassName: lvms-vg1
---
# file pvc
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: lvm-file-1
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 10Gi
  storageClassName: lvms-vg1
```
- Create the PVC by running the following command:

```shell
oc create -f pvc.yaml -n <application_namespace>
```
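Because the StorageClass uses `WaitForFirstConsumer`, a PVC remains `Pending` until a pod consumes it. As an illustrative sketch (the pod name, image, and mount path are assumptions), a minimal pod that mounts the file PVC from the example above could look like this:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: lvm-file-1-consumer          # hypothetical pod name
  namespace: default
spec:
  containers:
    - name: app
      image: registry.access.redhat.com/ubi8/ubi-minimal  # illustrative image
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: data
          mountPath: /data           # illustrative mount path
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: lvm-file-1        # the file PVC created above
```

Once this pod is scheduled, the PVC binds and LVM Storage allocates the backing logical volume.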
The LVM Storage Operator in OpenShift exposes several metrics and alerts that can be used to monitor and manage logical volume manager storage on single-node OpenShift clusters. Metrics are available for use by the Prometheus-based OpenShift Container Platform cluster monitoring stack.

The following metrics are exposed by the LVM Storage Operator:

- `topolvm_thinpool_data_percent`
- `topolvm_thinpool_metadata_percent`
- `topolvm_thinpool_size_bytes`

**NOTE:** Metrics are updated every 10 minutes or when there is a change in the thin pool, such as a new logical volume creation.
When the thin pool and volume group are filled up, further operations fail and might lead to data loss. LVM Storage sends the following alerts about the usage of the thin pool and volume group when utilization crosses a certain value:
- `VolumeGroupUsageAtThresholdNearFull`: This alert is triggered when both the volume group and thin pool utilization cross 75% on nodes. Data deletion or volume group expansion is required.
- `VolumeGroupUsageAtThresholdCritical`: This alert is triggered when both the volume group and thin pool utilization cross 85% on nodes; the volume group is critically full. Data deletion or volume group expansion is required.
- `ThinPoolDataUsageAtThresholdNearFull`: This alert is triggered when the thin pool data utilization in the volume group crosses 75% on nodes. Data deletion or thin pool expansion is required.
- `ThinPoolDataUsageAtThresholdCritical`: This alert is triggered when the thin pool data utilization in the volume group crosses 85% on nodes. Data deletion or thin pool expansion is required.
- `ThinPoolMetaDataUsageAtThresholdNearFull`: This alert is triggered when the thin pool metadata utilization in the volume group crosses 75% on nodes. Data deletion or thin pool expansion is required.
- `ThinPoolMetaDataUsageAtThresholdCritical`: This alert is triggered when the thin pool metadata utilization in the volume group crosses 85% on nodes. Data deletion or thin pool expansion is required.
Administrators can add additional capacity to the LVMS Operator by adding disks to the `LVMCluster` resource, either using the command line (`oc edit LVMCluster/<name> -n openshift-storage`) or via the web console. As an example, we could add the disk `/dev/disk/by-path/pci-0000:89:00.0-nvme-1` by adding it under `spec.storage.deviceClasses.deviceSelector.paths` as shown below:
```yaml
apiVersion: lvm.topolvm.io/v1alpha1
kind: LVMCluster
metadata:
  name: my-lvmcluster
spec:
  storage:
    deviceClasses:
      - name: vg1
        deviceSelector:
          paths:
            - /dev/disk/by-path/pci-0000:87:00.0-nvme-1
            - /dev/disk/by-path/pci-0000:88:00.0-nvme-1
            - /dev/disk/by-path/pci-0000:89:00.0-nvme-1 # <= New disk using 'by-path'
        thinPoolConfig:
          name: thin-pool-1
          sizePercent: 90
          overprovisionRatio: 10
```
Alternatively, an administrator could add additional capacity using RHACM. This method edits the same `spec.storage.deviceClasses.deviceSelector.paths` as above, in the `ConfigurationPolicy` object named `lvms`.
Administrators can expand LVMS-provisioned PVCs by using `oc patch` against the PVC. As an example, you could expand a 2Gi PVC to 3Gi using the syntax below:

```shell
# oc patch pvc <pvc_name> -n <application_namespace> -p '{ "spec": { "resources": { "requests": { "storage": "3Gi" }}}}'
```
Once the patch command has run, watch the `status.conditions` field of the PVC to see whether the resize has completed. OpenShift Container Platform adds the `Resizing` condition to the PVC during expansion and removes it after the expansion completes.
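As a quick check (the PVC name and namespace are placeholders), you can inspect the PVC's conditions and capacity with `oc`:

```shell
# Show any in-progress conditions (e.g. Resizing) on the PVC
oc get pvc <pvc_name> -n <application_namespace> -o jsonpath='{.status.conditions}'

# Confirm the new capacity once the resize completes
oc get pvc <pvc_name> -n <application_namespace> -o jsonpath='{.status.capacity.storage}'
```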
You can take volume snapshots of persistent volumes (PVs) that are provisioned by LVM Storage. To take a volume snapshot, you must meet the prerequisites below:

- The persistent volume claim (PVC) is in the `Bound` state. This is required for a consistent snapshot.
- You stopped all the I/O to the PVC before taking the snapshot.
Once you have confirmed that you meet the prerequisites, you can create a new `VolumeSnapshot` object that references the PVC you want to take a snapshot of. As an example, to take a snapshot of the PVC `lvm-block-1`, you could create the object below:
```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: lvm-block-1-snap
spec:
  volumeSnapshotClassName: lvms-vg1
  source:
    persistentVolumeClaimName: lvm-block-1
```
Once this `VolumeSnapshot` object is created, a read-only copy of the PVC `lvm-block-1` is created as a volume snapshot.
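To verify that the snapshot is ready to be restored or cloned (the names here match the example above), you can check its `readyToUse` status field:

```shell
# Prints "true" once the snapshot is ready for restore or clone operations
oc get volumesnapshot lvm-block-1-snap -o jsonpath='{.status.readyToUse}'
```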
Using volume snapshots, you can restore a PVC to a previous state. To do so, you must meet the following prerequisites:

- The StorageClass must be the same as that of the source PVC.
- The size of the requested PVC must be the same as that of the source volume of the snapshot.

To restore from a snapshot, follow the procedure below:

- First, identify the storage class name of the source PVC and the volume snapshot name.
- Save the following YAML, using the StorageClass name and volume snapshot name from the first step, to a file named `lvms-vol-restore.yaml`.
```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: lvm-block-1-restore
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block
  resources:
    requests:
      storage: 2Gi
  storageClassName: lvms-vg1
  dataSource:
    name: lvm-block-1-snap
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
```
- Create the PVC by running the following command in the same namespace as the snapshot:

```shell
# oc create -f lvms-vol-restore.yaml
```
A **volume clone** is a duplicate of an existing storage volume that can be used like any standard volume. An administrator can create a clone of a volume to make a point-in-time copy of the data. A persistent volume claim (PVC) cannot be cloned with a different size. The prerequisites to clone a volume are below:

- The PVC is in the `Bound` state. This is required for a consistent clone.
- The StorageClass must be the same as that of the source PVC.
As an example, to create a volume clone of the `lvm-block-1` PVC from the `lvms-vg1` StorageClass, follow the procedure below:

- Identify the `spec.storageClassName` and `metadata.name` fields of the source PVC.
- Use the storage class from the first step and save the following YAML to a file with a name such as `lvms-vol-clone.yaml`:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: lvm-block-1-clone
spec:
  storageClassName: lvms-vg1
  dataSource:
    name: lvm-block-1
    kind: PersistentVolumeClaim
  accessModes:
    - ReadWriteOnce
  volumeMode: Block
  resources:
    requests:
      storage: 2Gi
```
- Create the clone by running the following command in the same namespace as the source PVC:

```shell
# oc create -f lvms-vol-clone.yaml
```
When LVM Storage is unable to automatically resolve a problem, use the must-gather tool to collect the log files and diagnostic information so that you or Red Hat Support can review the problem and determine a solution. Run the must-gather command from a client connected to the LVM Storage cluster:

```shell
# oc adm must-gather --image=registry.redhat.io/lvms4/lvms-must-gather-rhel8:v4.12 --dest-dir=<directory-name>
```
More information about the must-gather tool is available in the OpenShift Container Platform documentation.
Looking ahead, planned improvements to LVMS include:

- Support for multiple storage classes (e.g. for HDD and SSD)
- Support for disconnected installations
- Resource reduction (less CPU/memory requested)
- IPv6 dual-stack support