
Raft Engine


Raft Engine is a persistent embedded storage engine with a log-structured design similar to Bitcask. It is built for TiKV to store Multi-Raft logs.

Features

  • APIs for storing and retrieving protobuf log entries with consecutive indexes
  • Key-value storage for individual Raft Groups
  • Minimum write amplification
  • Collaborative garbage collection
  • Supports lz4 compression over log entries
  • Supports file system extension

Design

Raft Engine consists of two basic constructs: memtable and log file.

In memory, each Raft Group holds its own memtable, containing all of its key-value pairs and the file locations of all its log entries. On storage, user writes are sequentially appended to the active log file, which is rotated to keep its size below a configurable threshold. Different Raft Groups share the same log stream.
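
As a rough illustration, the sketch below opens an engine with an explicit rotation threshold. It assumes the crate's Config, Engine, and ReadableSize types and the field names dir, target_file_size, and purge_threshold; names may differ between versions, so check docs.rs for the exact definitions.

use raft_engine::{Config, Engine, ReadableSize};

fn main() {
    let cfg = Config {
        // Directory holding the shared log files.
        dir: "/tmp/raft-engine-demo".to_owned(),
        // Rotate the active log file once it grows to roughly this size.
        target_file_size: ReadableSize::mb(128),
        // Total log size that starts producing purge suggestions (see Garbage Collection).
        purge_threshold: ReadableSize::gb(1),
        ..Default::default()
    };
    let _engine = Engine::open(cfg).expect("failed to open Raft Engine");
}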

Write

Similar to RocksDB, Raft Engine provides atomic writes. Users can stash the changes into a log batch before submitting.

The writing of one log batch can be broken down into three steps:

  1. Optionally compress the log entries
  2. Write to log file
  3. Apply to memtable

At step 2, to group concurrent requests, each writing thread must enter a queue. The first in line automatically becomes the queue leader, responsible for writing the entire group to the log file.

Both synchronous and non-synchronous writes are supported. When any write in the group is marked synchronous, the group leader will call fdatasync() after writing, so that the buffered data is guaranteed to be flushed onto the storage.

After its data is written, each writing thread proceeds to apply its own changes to the memtable.
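
The sketch below walks through this write path using the crate's LogBatch and Engine::write. The MessageExt implementation and the eraftpb::Entry type borrowed from the raft crate are illustrative assumptions, and method signatures may differ slightly between versions.

use raft::eraftpb::Entry;
use raft_engine::{Config, Engine, LogBatch, MessageExt};

// Tells the engine how to read the index out of our protobuf entry type
// (assumption: any protobuf message with a consecutive index can be used
// by implementing MessageExt for it).
struct M;
impl MessageExt for M {
    type Entry = Entry;
    fn index(e: &Entry) -> u64 {
        e.index
    }
}

fn main() {
    let cfg = Config {
        dir: "/tmp/raft-engine-demo".to_owned(),
        ..Default::default()
    };
    let engine = Engine::open(cfg).expect("open");

    // Stash changes for Raft Group 1: consecutive log entries plus a key-value pair.
    let entries: Vec<Entry> = (1..=10)
        .map(|i| {
            let mut e = Entry::default();
            e.index = i;
            e
        })
        .collect();
    let mut batch = LogBatch::default();
    batch.add_entries::<M>(1, &entries).expect("add entries");
    batch.put(1, b"hard_state".to_vec(), b"...".to_vec()).expect("put");

    // Submit atomically; `true` marks the write synchronous, so the group leader
    // calls fdatasync() after writing to the active log file.
    engine.write(&mut batch, true).expect("write");

    // Reads are served through the memtable's index of file locations.
    let e = engine.get_entry::<M>(1, 10).expect("get").unwrap();
    assert_eq!(M::index(&e), 10);
}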

Garbage Collection

After changes are applied to the local state machine, the corresponding log entries can be logically compacted from Raft Engine. Because multiple Raft Groups share the same log stream, these truncated logs punch holes in the log files. During garbage collection, Raft Engine scans for these holes and compacts log files to free up storage space. Only at this point are the unneeded log entries physically deleted.

Raft Engine carries out garbage collection in a collaborative manner.

First, its timing is controlled by the user. Raft Engine consolidates and removes its log files only when the user voluntarily calls the purge_expired_files() routine. For reference, TiKV calls it every 10 seconds by default.

Second, it sends useful feedback to the user. Each time the GC routine is called, Raft Engine will examine itself and return a list of Raft Groups that hold particularly old log entries. Those log entries block the GC progress and should be compacted by the user.
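
The sketch below shows what that collaboration can look like. purge_expired_files() and compact_to() come from the crate's API, while the calling cadence and the applied_index() lookup are placeholders for the user's own scheduling and state machine bookkeeping.

use raft_engine::Engine;

// Hypothetical user-side bookkeeping: the index each Raft Group has already
// applied to its local state machine.
fn applied_index(_region_id: u64) -> u64 {
    unimplemented!("tracked by the user, not by Raft Engine")
}

// Call this periodically, e.g. every 10 seconds as TiKV does.
fn collaborative_gc(engine: &Engine) {
    // Consolidate and remove log files; get back the Raft Groups whose old
    // entries are blocking further progress.
    let stale_groups = engine.purge_expired_files().expect("purge");
    for region_id in stale_groups {
        // Compact those groups up to whatever has been applied locally, so the
        // next purge can physically reclaim the space.
        engine.compact_to(region_id, applied_index(region_id));
    }
}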

Using this crate

Put this in your Cargo.toml:

[dependencies]
raft-engine = "0.1.0"

Available Cargo features:

  • scripting: Compiles with Rhai. This enables script debugging utilities including unsafe_repair.
  • nightly: Enables nightly-only features including test.
  • internals: Re-exports key components internal to Raft Engine. Enabled when building for docs.rs.
  • failpoints: Enables fail point testing powered by tikv/fail-rs.
  • swap: Uses SwappyAllocator to limit the memory usage of Raft Engine. The memory budget can be configured with memory-limit. Depends on the nightly feature.
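
For example, to turn on one of these features (the scripting feature is used here purely for illustration):

[dependencies]
raft-engine = { version = "0.1.0", features = ["scripting"] }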

See some basic use cases under the examples directory.
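
As a complement to those examples, here is a rough sketch of reading a group's entries back, using the crate's first_index, last_index, and fetch_entries_to together with the same kind of illustrative MessageExt implementation shown in the Write section; treat the exact signatures as approximate.

use raft_engine::{Engine, MessageExt};

// Fetch every entry currently stored for one Raft Group.
fn read_all<M: MessageExt>(engine: &Engine, region_id: u64) -> Vec<M::Entry> {
    let mut out = Vec::new();
    if let (Some(first), Some(last)) =
        (engine.first_index(region_id), engine.last_index(region_id))
    {
        // `None` means no size limit on the fetched batch.
        engine
            .fetch_entries_to::<M>(region_id, first, last + 1, None, &mut out)
            .expect("fetch entries");
    }
    out
}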

Contributing

Contributions are always welcome! Here are a few tips for making a PR:

  • All commits must be signed off (with git commit -s) to pass the DCO check.
  • Tests are automatically run against the changes; some of them can also be run locally:
cargo fmt --all -- --check
cargo +nightly clippy --all --all-features --all-targets -- -D clippy::all
cargo +nightly test --all --features all_except_failpoints
cargo +nightly test --test failpoints --all-features -- --test-threads 1
  • For changes that might affect performance, please include the relevant benchmark results in the PR description. In addition to micro-benchmarks, there is a standalone stress test tool that you can use to demonstrate the system performance.
cargo +nightly bench --all-features <bench-case-name>
cargo run --release --package stress --help

License

Copyright (c) 2017-present, PingCAP, Inc. Released under the Apache 2.0 license. See LICENSE for details.