OpenShift Data Foundation (ODF) provides a scalable, software-defined storage solution integrated with OpenShift, using Ceph for dynamic storage provisioning. It ensures high availability, data protection, and persistent storage for containerized applications. ODF supports block, file, and object storage, catering to diverse application needs. It simplifies storage management and enhances operational efficiency within OpenShift environments.
To ensure that your OpenShift cluster is properly configured with storage nodes, follow the steps below to label and taint the nodes appropriately. There are two alternatives: if you need to create the nodes and your cluster uses IPI on AWS, use Option 1; if you already have nodes for ODF, use Option 2.
For optimal performance, particularly with storage-intensive workloads, we recommend using the following AWS instance type:
- m6id.4xlarge
This instance type provides a good balance of compute, memory, and local NVMe storage, making it suitable for OpenShift storage nodes.
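As a quick check on an existing cluster, you can list each node's instance type using the well-known node.kubernetes.io/instance-type label:
# Show the cloud instance type of every node
oc get nodes -L node.kubernetes.io/instance-type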
ℹ️ If your cluster is not on AWS, check the recommendations from your cloud provider.
# Create one ODF MachineSet per availability zone (a, b, c)
for az in a b c; do
  oc process -f prerequisites/template-odf-nodes.yaml \
    -p INFRASTRUCTURE_ID=$(oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster) \
    -p INSTANCE_TYPE="m6id.4xlarge" -p AZ=$az | \
    oc apply -n openshift-machine-api -f -
done
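It can take a few minutes for the machines to boot and join the cluster; you can follow the progress with this optional verification step:
# Confirm the new MachineSets exist and their machines are running
oc get machinesets -n openshift-machine-api
oc get machines -n openshift-machine-api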
❗ The previous template already provides the labels and taints for the ODF nodes. Do not execute the following commands if you created the nodes using the prerequisites template.
Define the nodes that will be used for storage. Next, apply the necessary labels and taints to these nodes:
ODF_NODES="nodeA nodeB nodeC"
# Node names below assume AWS internal DNS in eu-west-1; adjust the suffix for your cluster
for node in $ODF_NODES; do
  oc label node ${node}.eu-west-1.compute.internal cluster.ocs.openshift.io/openshift-storage=""
  oc label node ${node}.eu-west-1.compute.internal node-role.kubernetes.io/infra=""
  oc adm taint node ${node}.eu-west-1.compute.internal node.ocs.openshift.io/storage="true":NoSchedule
done
These commands will:
* Label the nodes for OpenShift Data Foundation (ODF) storage.
* Assign the nodes to the infra role.
* Taint the nodes so they are used exclusively for storage, preventing other workloads from being scheduled on them.
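You can verify that the labels and taints were applied with this optional check:
# List the storage nodes and show their taints
oc get nodes -l cluster.ocs.openshift.io/openshift-storage=
oc get nodes -l cluster.ocs.openshift.io/openshift-storage= -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.taints}{"\n"}{end}'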
In this repository, we have automated the deployment and configuration of the OpenShift environment using GitOps principles, managed through ArgoCD. Create the ArgoCD Application using the following command:
oc apply -f application-ocp-odf.yaml
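Assuming ArgoCD was installed through the OpenShift GitOps operator (whose default namespace is openshift-gitops), you can watch the Application until it reports Synced and Healthy:
# Watch the ArgoCD Application sync status
oc get applications.argoproj.io -n openshift-gitops -w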
Key components include:
- job-annotate-storageclass: A job designed to automate the annotation of storage classes, ensuring that the appropriate default settings are applied to your storage configurations.
- job-enable-console-plugin: Enables specific console plugins, such as the ODF console, to enhance the user experience and provide additional functionality in the OpenShift web console.
- lso-operator: Deploys and configures the Local Storage Operator (LSO), managing local storage resources and ensuring they are available for use in the cluster.
- odf-operator: Deploys the OpenShift Data Foundation (ODF) operator, which is essential for managing storage solutions within the cluster, providing integrated and scalable storage services.
- storagecluster-ocs-storagecluster: Defines and deploys the OpenShift Container Storage (OCS) storage cluster, ensuring high availability and robust storage management for the cluster.
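Once ArgoCD reports the Application as synced, you can check the resulting resources (the openshift-storage namespace is the ODF default):
# Verify the StorageCluster and the storage classes provided by ODF
oc get storagecluster -n openshift-storage
oc get storageclass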
ℹ️ This section is based on this blog post.
There are multiple ways to tackle this problem, including:
- The Red Hat OpenShift API for Data Protection (OADP) operator. It offers comprehensive disaster recovery protection, covering OpenShift Container Platform applications, application-related cluster resources, persistent volumes, and internal images. OADP is also capable of backing up both containerized applications and virtual machines (VMs).
- OpenShift Data Foundation (ODF) Disaster Recovery tooling, which provides synchronous or asynchronous data replication between clusters, ensuring minimal downtime and data loss during site or regional failures. These solutions integrate with Red Hat Advanced Cluster Management for automated failover and recovery.
- Various third-party solutions.
OADP is the OpenShift API for Data Protection operator. This open source operator sets up and installs Velero on the OpenShift platform, allowing users to back up and restore applications. You can install it with the following ArgoCD Application:
oc apply -f application-ocp-oadp.yaml
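After OADP is installed and a DataProtectionApplication with a backup storage location has been configured, backups are requested through Velero Backup resources. The following is a minimal sketch, assuming OADP's default openshift-adp namespace and a hypothetical my-app application namespace:
# Illustrative only: back up a hypothetical "my-app" namespace
oc apply -f - <<'EOF'
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: my-app-backup
  namespace: openshift-adp
spec:
  includedNamespaces:
    - my-app
EOF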
Understanding the interaction between StorageClass and PersistentVolumeClaim (PVC) volume modes is crucial for correctly setting up storage in Kubernetes.
- StorageClass volumeMode: A StorageClass can provision storage in two main modes:
  - Block: The storage is provisioned as a raw block device.
  - Filesystem: The storage is provisioned as a filesystem. This is the most common setup, where Kubernetes automatically creates a filesystem on the volume.
- PVC volumeMode: The volumeMode in a PVC determines how the provisioned storage is presented to the pod (see the examples after this list):
  - Filesystem: The volume is mounted as a filesystem, which is typically the default and most common mode. Kubernetes handles the filesystem creation, making the volume ready for file-based operations.
  - Block: The volume is exposed as a raw block device, which can be useful for applications requiring direct disk access.

More info here.
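To make this concrete, here is a minimal sketch of both PVC volume modes. The PVC names and the storageClassName ocs-storagecluster-ceph-rbd (the usual ODF RBD class) are assumptions; adjust them for your cluster:
# Illustrative PVCs; names and storageClassName are assumptions for this example
oc apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data-fs
spec:
  accessModes: ["ReadWriteOnce"]
  volumeMode: Filesystem   # mounted as a filesystem (the default)
  resources:
    requests:
      storage: 10Gi
  storageClassName: ocs-storagecluster-ceph-rbd
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data-block
spec:
  accessModes: ["ReadWriteOnce"]
  volumeMode: Block        # exposed to the pod as a raw block device
  resources:
    requests:
      storage: 10Gi
  storageClassName: ocs-storagecluster-ceph-rbd
EOF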
ℹ️ The most common configuration is Filesystem for the PVC.