
中文版 (Chinese version)

CURVE


Curve is a high-performance, easy-to-operate, cloud-native, open-source distributed storage system. It can be applied to mainstream cloud-native infrastructure platforms: it connects to OpenStack to provide high-performance block storage for cloud VMs; to Kubernetes to provide RWO, RWX and other types of persistent volumes; and to PolarFS as high-performance storage for cloud-native databases, fully supporting their storage-compute separation architecture. Curve can also serve as cloud storage middleware that uses S3-compatible object storage as its data storage engine, providing cost-effective shared file storage for public cloud users.

Curve is hosted by the Cloud Native Computing Foundation (CNCF) as a sandbox project.

Curve Architecture

The architecture overview of Curve is as follows:

Curve supports deployment in private and public cloud environments, and can also be used in a hybrid cloud. The deployment architecture in a private cloud environment is as follows:

The CurveFS shared file storage system can elastically scale out to public cloud storage, providing users with greater capacity elasticity, lower cost, and a better performance experience:

Curve Block Service vs Ceph Block Device

Curve: v1.2.0

Ceph: L/N

Performance

Curve's random read and write performance far exceeds Ceph's in the block storage scenario.

Environment: 3 replicas on a 6-node cluster; each node has 20x SATA SSDs, 2x E5-2660 v4 CPUs, and 256 GB of memory.

Single Vol:

Multi Vols:

Stability

Under common failure scenarios, Curve is more stable than Ceph in the block storage scenario.

| Fault Case | One Disk Failure | Slow Disk Detected | One Server Failure | Server Hang |
| :--- | :--- | :--- | :--- | :--- |
| Ceph | 7s jitter | continuous I/O jitter | 7s jitter | unrecoverable |
| Curve | 4s jitter | no effect | 4s jitter | 4s jitter |

Ops

Curve is more ops-friendly than Ceph in the block storage scenario.

| Ops scenario | Client upgrade | Balancing |
| :--- | :--- | :--- |
| Ceph | no live upgrade | via plugin, affects I/O |
| Curve | live upgrade with second-level jitter | automatic, no effect on I/O |

Design Documentation

Quick Start of CurveBS

To make Curve easier to operate and maintain, we designed and developed the CurveAdm project for deploying and managing Curve clusters. It currently supports deploying CurveBS & CurveFS (scale-out, upgrade and other functions are under development). Please refer to the CurveAdm User Manual for related documentation, and install the CurveAdm tool according to the manual before deploying a Curve cluster.

Deploy an all-in-one environment (to try how CURVE works)

Please refer to the CurveBS cluster deployment steps in the CurveAdm User Manual. For a single-machine experience environment, use the template "cluster topology file for single-machine deployment".
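
The overall CurveAdm workflow can be sketched as follows. This is an illustrative outline only: the cluster name and topology file name are placeholders, and the exact subcommands and flags should be checked against the CurveAdm User Manual.

```shell
# Register a topology file describing the cluster layout
# (for a single-machine trial, use the single-machine topology template)
curveadm cluster add my-cluster -f topology.yaml

# Make the newly added cluster the current working cluster
curveadm cluster checkout my-cluster

# Deploy all services defined in the topology, then verify their status
curveadm deploy
curveadm status
```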

Deploy on single machine - deprecated method

Deploy multi-machine cluster (try it in production environment)

Please refer to the CurveBS cluster deployment steps in the CurveAdm User Manual, and use the template "cluster topology file for multi-machine deployment".

Deploy on multiple machines - deprecated method

curve_ops_tool introduction

Quick Start of CurveFS

To make Curve easier to operate and maintain, we designed and developed the CurveAdm project for deploying and managing Curve clusters. It currently supports deploying CurveBS & CurveFS. Please refer to the CurveAdm User Manual for related documentation, and install the CurveAdm tool according to the manual before deploying a Curve cluster.

Details for deploying a CurveFS cluster: CurveFS deployment

curvefs_tool introduction

For Developers

How to participate in Curve project development is detailed in the Curve Community Guidelines.

Deploy build and development environment

development environment deployment

Compile test cases and run

test cases compiling and running

FIO curve block storage engine

A Curve engine for fio has been added. You can clone https://github.com/opencurve/fio and compile the fio tool with our engine (it depends on the nebd library). Example fio command line: `./fio --thread --rw=randwrite --bs=4k --ioengine=nebd --nebd=cbd:pool//pfstest_test_ --iodepth=10 --runtime=120 --numjobs=10 --time_based --group_reporting --name=curve-fio-test`
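
As a sketch, building and running that engine typically looks like the following. This assumes the nebd client library and headers are already installed on the build machine and that fio's standard autotools build picks them up; the volume name is the example one from above.

```shell
# Sketch: build fio with the Curve nebd engine (assumes nebd lib is installed)
git clone https://github.com/opencurve/fio
cd fio
./configure
make -j"$(nproc)"

# Run a 4k random-write test against a CurveBS volume
./fio --thread --rw=randwrite --bs=4k --ioengine=nebd \
      --nebd=cbd:pool//pfstest_test_ --iodepth=10 --runtime=120 \
      --numjobs=10 --time_based --group_reporting --name=curve-fio-test
```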

Release Cycle

  • CURVE release cycle: half a year for major versions, 1~2 months for minor versions

  • Versioning format: we use a sequence of three digits and an optional suffix (x.y.z{-suffix}), where x is the major version, y the minor version, and z the bugfix number. The suffix distinguishes beta (-beta), RC (-rc), and GA versions (no suffix). The major version x increases by 1 every half year, and y increases every 1~2 months. After a version is released, z increases whenever there is a bugfix.
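
As an illustration, the x.y.z{-suffix} scheme above can be checked with a small pattern. The helper name and the exact suffix set accepted here are assumptions for this sketch, not part of any Curve tooling:

```shell
# Hypothetical helper: check whether a string follows the x.y.z{-suffix}
# scheme, where the optional suffix is -beta or -rc (GA has no suffix).
is_curve_version() {
  echo "$1" | grep -Eq '^[0-9]+\.[0-9]+\.[0-9]+(-(beta|rc[0-9]*))?$'
}

is_curve_version "1.2.0"      && echo "1.2.0 ok"
is_curve_version "2.0.0-beta" && echo "2.0.0-beta ok"
is_curve_version "1.2"        || echo "1.2 rejected"
```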

Branch

All development happens on the master branch. When a new version is to be released, a branch release-x.y is cut from master, and the version is released from that branch.
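
For illustration, the mapping from a version number to its release branch can be sketched as follows (the helper is hypothetical, but the release-x.y naming follows the rule described above):

```shell
# Hypothetical helper: derive the release-x.y branch name for a version,
# by keeping the major and minor components and dropping the rest.
release_branch() {
  echo "$1" | sed -E 's/^([0-9]+)\.([0-9]+)\..*$/release-\1.\2/'
}

release_branch "1.2.0"      # prints release-1.2
release_branch "2.3.1-rc"   # prints release-2.3
```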

Feedback & Contact

  • Github Issues: you are sincerely welcome to report any bugs you come across or raise suggestions through Github Issues. If you have questions, refer to our FAQ or join our user group for more details.
  • FAQ: frequently asked questions from our user group; we'll keep working on it.
  • User group: we currently use a WeChat group.
  • Double Week Meetings: we hold an online community meeting every two weeks to discuss what Curve is doing and planning to do. The meeting time and links are published in the user group and in Double Week Meetings.