
Dynamically provision stateful, persistent, node-local volumes and filesystems for Kubernetes, backed by loop-mounted raw extent files on local hostPath storage.


RawFile-LocalPV [Experimental]

The RawFile-LocalPV OpenEBS Data-Engine is a more flexible, though more complex, derivative of the LocalPV-Hostpath Data-Engine.

These are a few reasons to consider using a node-based (rather than network-based) storage architecture:

  • Performance: Almost no network-based storage solution can match bare-metal disk performance in IOPS, latency, and throughput combined. And you’d like to get the best out of the SSDs you’ve got!
  • On-premise Environment: You might not be able to afford the cost of upgrading all your networking infrastructure to get the best out of a network-based storage solution.
  • Complexity: Network-based solutions are distributed systems, and distributed systems are not easy! You might want a system that is easier to understand and reason about. With less complexity, you can also troubleshoot unexpected issues more easily.

Using node-based storage

The OpenEBS LocalPV-HostPath Data-Engine makes it pretty easy to automatically provision hostPath PVs and use them in your workloads. But it has known limitations:

Important

  • You can’t monitor volume usage: There are hacky workarounds that run du regularly, but that can be a performance killer, since it puts a heavy burden on your CPU and fills up your filesystem cache. Not really good for a production workload.
  • You can’t enforce hard limits on a volume’s size: Again, you can hack your way around it, with the same caveats.
  • You are stuck with whatever filesystem your kubelet node offers.
  • You can’t customize your filesystem.

All the above issues stem from the same root cause: hostPath LocalPVs are simple bind-mounts from the host filesystem into the pod.

The idea behind RawFile-LocalPV

Use a filesystem-based 'extent file' as an emulated block device (i.e. a soft-LUN block device), and leverage the Linux loop device to expose that soft-LUN file as a complete, flexible block device (i.e. an emulated soft disk device). A PV with its own filesystem can then be created on top of that device (see the sketch after the note below). This allows you to...

Note

  • You can monitor volume usage by running df -hT in O(1), since each soft-LUN block device is mounted separately on the local node (showing utilization metrics for each mountpoint).
  • The size limit is enforced by the operating system, based on the size of the soft-LUN device file and the capacity of the backing filesystem.
  • Since volumes are backed by different files, each soft-LUN device file can be formatted with a different filesystem, and/or customized with different filesystem options.
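
A minimal sketch of the mechanism, using hypothetical paths and a hypothetical loop device number (the CSI driver automates all of these steps):

# Create a sparse extent file to back the volume (path is illustrative)
truncate -s 10G /var/csi/rawfile/pvc-example/disk.img

# Attach the file to a free loop device, exposing it as a block device
losetup --find --show /var/csi/rawfile/pvc-example/disk.img  # prints e.g. /dev/loop7

# Format the loop device with the desired filesystem
mkfs.ext4 /dev/loop7

# Mount it like any other disk; df -hT now reports its usage in O(1)
mkdir -p /mnt/pvc-example
mount /dev/loop7 /mnt/pvc-example
df -hT /mnt/pvc-example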

Prerequisites


  • Kubernetes: 1.21+

Installation

helm install -n kube-system rawfile-csi ./deploy/charts/rawfile-csi/
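
To confirm that the driver registered correctly, you can check the driver pods and the CSIDriver object (pod names vary by release; the driver name is assumed to match the provisioner name used in the StorageClass below):

kubectl -n kube-system get pods | grep rawfile
kubectl get csidrivers rawfile.csi.openebs.io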

Usage

Create a StorageClass with your desired options:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: my-sc
provisioner: rawfile.csi.openebs.io
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
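
A PersistentVolumeClaim and Pod consuming that class might look like this (names and sizes are illustrative). With volumeBindingMode: WaitForFirstConsumer, the volume is provisioned on whichever node the Pod is scheduled to:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: my-sc
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: my-pvc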

Features

  • Direct I/O: Near-zero disk performance overhead
  • Dynamic provisioning
  • Enforced volume size limit
  • Access Modes
    • ReadWriteOnce
    • ReadOnlyMany
    • ReadWriteMany
  • Volume modes
    • Filesystem mode
    • Block mode
  • Volume metrics
  • Supports fsTypes: ext4, btrfs, xfs (see the example after this list)
  • Online expansion: If fs supports it (e.g. ext4, btrfs, xfs)
  • Online shrinking: If fs supports it (e.g. btrfs)
  • Offline expansion/shrinking
  • Ephemeral inline volume
  • Filesystem-level snapshots: btrfs supported
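
Since the filesystem type is a property of the StorageClass, a per-class fsType can be requested through the standard CSI fstype parameter (a common convention for CSI drivers, assumed to apply here):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: my-btrfs-sc
provisioner: rawfile.csi.openebs.io
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
parameters:
  csi.storage.k8s.io/fstype: btrfs

With allowVolumeExpansion: true, an online expansion is requested by simply increasing spec.resources.requests.storage on the bound PVC.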

License

Apache License 2.0