Firewood: Compaction-Less Database Optimized for Efficiently Storing Recent Merkleized Blockchain State
⚠️ Firewood is alpha-level software and is not ready for production use. The Firewood API and on-disk state representation may change with little to no warning.
Firewood is an embedded key-value store, optimized to store recent Merkleized blockchain state with minimal overhead. Firewood is implemented from the ground up to directly store trie nodes on-disk. Unlike most state management approaches in the field, it is not built on top of a generic KV store such as LevelDB/RocksDB. Firewood, like a B+-tree based database, directly uses the trie structure as the index on-disk. Thus, there is no additional “emulation” of the logical trie to flatten the data structure into a form that an underlying, data-unaware database can store. The convenient byproduct of this approach is that iteration is still fast (for serving state sync queries) but compaction is not required to maintain the index. Firewood was first conceived to provide a very fast storage layer for the EVM but could be used on any blockchain that requires authenticated state.
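To make "the trie itself is the on-disk index" concrete, here is a minimal illustrative sketch. The types and layout below are assumptions for illustration only, not Firewood's actual node format: they merely show that a node persisted directly on disk can reference its children by disk address, so no generic KV layer is needed to map trie paths onto rows.

```rust
// Illustrative sketch only -- NOT Firewood's actual on-disk node format.
// When trie nodes are written directly to the store file, a node can refer to
// its children by their disk addresses, so the trie itself is the index and
// no generic KV store sits underneath it.

/// Hypothetical byte offset of a node within the store file.
#[derive(Clone, Copy, Debug)]
struct DiskAddr(u64);

/// Hypothetical branch node: one child slot per hex nibble, plus an optional value.
#[derive(Debug)]
struct BranchNode {
    children: [Option<DiskAddr>; 16],
    value: Option<Vec<u8>>,
}

fn main() {
    // A root whose nibble-0 child lives at a (hypothetical) offset of 4096 bytes.
    let mut root = BranchNode { children: [None; 16], value: None };
    root.children[0] = Some(DiskAddr(4096));
    println!("{root:?}");
}
```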
Firewood only attempts to store the latest state on-disk and will actively clean up unused state when state diffs are committed. To avoid reference counting trie nodes, Firewood does not copy-on-write (COW) the state trie and instead keeps one latest version of the trie index on disk and applies in-place updates to it. Firewood keeps some configurable number of previous states in memory to power state sync (which may occur at a few roots behind the current state).
Firewood provides OS-level crash recovery via a write-ahead log (WAL). The WAL guarantees atomicity and durability in the database, but also offers “reversibility”: some portion of the old WAL can optionally be kept around to allow a fast in-memory rollback that recovers past versions of the entire store. While the store is running, new changes also contribute to the configured window of changes (at batch granularity), so any past version within that window can be accessed at no additional cost.
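The "configured window of changes" can be pictured as a bounded queue of committed batches. The sketch below is a toy model of that idea, not Firewood's implementation: once a batch falls out of the window, the revision it produced can no longer be reached by rolling back.

```rust
// Toy model of a bounded revision window -- not Firewood's implementation.
// Committed batches enter at the back; once a batch is evicted from the front,
// the revision it produced is no longer reachable by rollback.
use std::collections::VecDeque;

struct RevisionWindow<T> {
    max_len: usize,
    batches: VecDeque<T>, // oldest at the front, newest at the back
}

impl<T> RevisionWindow<T> {
    fn new(max_len: usize) -> Self {
        Self { max_len, batches: VecDeque::new() }
    }

    /// Record a committed batch, evicting the oldest one if the window is full.
    fn push(&mut self, batch: T) {
        if self.batches.len() == self.max_len {
            self.batches.pop_front();
        }
        self.batches.push_back(batch);
    }

    /// The batches to undo (newest first) to reach the revision `n` commits back,
    /// or None if that revision has already left the window.
    fn rollback(&self, n: usize) -> Option<Vec<&T>> {
        if n > self.batches.len() {
            return None;
        }
        Some(self.batches.iter().rev().take(n).collect())
    }
}

fn main() {
    let mut window = RevisionWindow::new(2);
    for batch in ["batch-1", "batch-2", "batch-3"] {
        window.push(batch);
    }
    assert!(window.rollback(2).is_some()); // two revisions back: still reachable
    assert!(window.rollback(3).is_none()); // three back: evicted from the window
    println!("window holds {} batches", window.batches.len());
}
```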
- `Revision` - A historical point-in-time state/version of the trie. This represents the entire trie, including all `Key`/`Value`s at that point in time, and all `Node`s.
- `View` - This is the interface to read from a `Revision` or a `Proposal`.
- `Node` - A node is a portion of a trie. A trie consists of nodes that are linked together. Nodes can point to other nodes and/or contain `Key`/`Value` pairs.
- `Hash` - In this context, this refers to the merkle hash for a specific node.
- `Root Hash` - The hash of the root node for a specific revision.
- `Key` - Represents an individual byte array used to index into a trie. A `Key` usually has a specific `Value`.
- `Value` - Represents a byte array for the value of a specific `Key`. Values can contain 0-N bytes. In particular, a zero-length `Value` is valid.
- `Key Proof` - A proof that a `Key` exists within a specific revision of a trie. This includes the hash for the node containing the `Key` as well as all of its parents.
- `Range Proof` - A proof that consists of two `Key Proof`s, one for the start of the range and one for the end of the range, as well as a list of all `Key`/`Value` pairs in between the two. A `Range Proof` can be validated independently of an actual database by constructing a trie from the `Key`/`Value`s provided.
- `Change Proof` - A proof that consists of a set of all changes between two revisions.
- `Put` - An operation for a `Key`/`Value` pair. A put means "create if it doesn't exist, or update it if it does." A put operation is how you add a `Value` for a specific `Key`.
- `Delete` - An operation indicating that a `Key` should be removed from the trie.
- `Batch Operation` - An operation of either `Put` or `Delete`.
- `Batch` - An ordered set of `Batch Operation`s.
- `Proposal` - A proposal consists of a base `Root Hash` and a `Batch`, but is not yet committed to the trie. In Firewood's most recent API, a `Proposal` is required to `Commit`.
- `Commit` - The operation of applying one or more `Proposal`s to the most recent `Revision`.
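To tie the terminology together, the following self-contained sketch models a `Batch` as an ordered list of `Put`/`Delete` operations. The enum and types are illustrative stand-ins, not Firewood's public API; in Firewood, such a batch would be turned into a `Proposal` against a base `Root Hash` and only then committed.

```rust
// Minimal, self-contained sketch (not Firewood's actual types) of the
// terminology above: a Batch is an ordered list of Put/Delete operations.
#[derive(Debug, Clone)]
enum BatchOp {
    Put { key: Vec<u8>, value: Vec<u8> }, // create if missing, update otherwise
    Delete { key: Vec<u8> },              // remove the key from the trie
}

type Batch = Vec<BatchOp>;

fn main() {
    let batch: Batch = vec![
        BatchOp::Put { key: b"alice".to_vec(), value: b"100".to_vec() },
        BatchOp::Put { key: b"bob".to_vec(), value: vec![] }, // zero-length values are valid
        BatchOp::Delete { key: b"carol".to_vec() },
    ];
    // In Firewood, a batch like this would be proposed against the latest
    // committed revision and then committed, producing a new root hash.
    println!("{} operations in batch", batch.len());
}
```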
LEGEND
- Not started
- 🏃 In progress
- Complete
This milestone will focus on additional code cleanup, including supporting concurrent access to a specific revision, as well as cleaning up the basic reader and writer interfaces to have consistent read/write semantics.
- Concurrent readers of pinned revisions while additional batches are committed, to support parallel reads of past consistent states. Revisions are uniquely identified by their root hashes.
- Pin a reader to a specific revision, so that future commits or other operations do not see any changes.
- Be able to read-your-write in a batch that is not committed. Uncommitted changes will not be shown to any other concurrent readers.
- Add a metrics framework to support timings and volume for future milestones. To support this, a new method `Db::metrics()` returns an object that can be serialized into Prometheus metrics or JSON (it implements `serde::Serialize`); a short serialization sketch follows this list.
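As referenced above, here is a minimal sketch of what consuming such a metrics object could look like. `DbMetrics` and its fields are made up for illustration and are not Firewood's actual metrics type; the sketch only shows that any `serde::Serialize` value can be rendered as JSON (using the serde and serde_json crates) or fed to a Prometheus exporter.

```rust
// Sketch only: `DbMetrics` and its fields are stand-ins, not Firewood's real
// metrics type. Anything implementing serde::Serialize can be turned into JSON
// this way (requires serde with the "derive" feature, plus serde_json).
use serde::Serialize;

#[derive(Serialize)]
struct DbMetrics {
    commits: u64,            // hypothetical counter
    wal_bytes_written: u64,  // hypothetical counter
    cached_revisions: usize, // hypothetical gauge
}

fn main() -> Result<(), serde_json::Error> {
    // In the real API this value would come from something like Db::metrics().
    let metrics = DbMetrics { commits: 42, wal_bytes_written: 1_048_576, cached_revisions: 8 };
    println!("{}", serde_json::to_string_pretty(&metrics)?);
    Ok(())
}
```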
This milestone will add support for proposals, including proposed future branches, with a cache to make committing these branches efficient. A toy sketch of layered proposals follows the list below.
- Be able to support multiple proposed revisions against latest committed version.
- Be able to propose a batch against the existing committed revision, or propose a batch against any existing proposed revision.
- Committing a batch that has been proposed will invalidate all other proposals that are not children of the committed proposal.
- Be able to quickly commit a batch that has been proposed.
- Remove RLP encoding
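As mentioned above, here is a toy, in-memory model of layered proposals; it is not Firewood's implementation. Each proposal overlays key/value changes on either the committed state or a parent proposal, and reads fall through the chain of overlays to the committed base.

```rust
// Toy, in-memory model of layered proposals -- not Firewood's implementation.
// Each proposal records key changes (None = deletion) on top of either the
// committed state or a parent proposal; reads fall through the overlays.
use std::collections::HashMap;

struct Proposal {
    changes: HashMap<String, Option<String>>, // None marks a deleted key
}

/// Read `key` through a chain of proposals (oldest first), falling back to
/// the committed base state.
fn get<'a>(
    key: &str,
    chain: &'a [Proposal],
    committed: &'a HashMap<String, String>,
) -> Option<&'a String> {
    for proposal in chain.iter().rev() {
        if let Some(change) = proposal.changes.get(key) {
            return change.as_ref(); // a later proposal shadows earlier ones
        }
    }
    committed.get(key)
}

fn main() {
    let committed: HashMap<String, String> =
        [("alice".to_string(), "100".to_string())].into_iter().collect();

    // p1 is proposed against the committed revision; p2 is proposed against p1.
    let p1 = Proposal {
        changes: [("bob".to_string(), Some("50".to_string()))].into_iter().collect(),
    };
    let p2 = Proposal {
        changes: [("alice".to_string(), None)].into_iter().collect(),
    };

    let chain = [p1, p2];
    assert_eq!(get("bob", &chain, &committed), Some(&"50".to_string()));
    assert_eq!(get("alice", &chain, &committed), None); // deleted by p2
    println!("layered proposal reads behave as expected");
}
```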
The focus of this milestone will be to support synchronization to other instances to replicate the state. A synchronization library should also be developed for this milestone.
- 🏃 Add support for Ava Labs generic test tool via grpc client
- 🏃 Pluggable encoding for nodes, for optional compatibility with MerkleDB
- Support replicating the full state with corresponding range proofs that verify the correctness of the data.
- Support replicating the delta state from the last sync point with corresponding range proofs that verify the correctness of the data.
- Enforce limits on the size of the range proof as well as keys to make synchronization easier for clients.
- MerkleDB root hash parity, for a seamless transition between MerkleDB and Firewood.
- Add metric reporting
- Migrate to a fully async interface, consider tokio_uring, monoio, etc
- Refactor `Shale` to be more idiomatic; consider rearchitecting it
Firewood currently is Linux-only, as it has a dependency on the asynchronous I/O provided by the Linux kernel (see `libaio`). Unfortunately, Docker is not able to successfully emulate the syscalls `libaio` relies on, so Linux or a Linux VM must be used to run Firewood. We intend to migrate to io_uring, which should allow for this emulation.
There are several examples in the examples directory that simulate real-world use cases. Try running them from the command line, e.g. `cargo run --release --example simple`.
See the release documentation for detailed information on how to release Firewood.
Firewood comes with a CLI tool called `fwdctl` that enables one to create and interact with a local instance of a Firewood database. For more information, see the fwdctl README.
To run the test suite: `cargo test --release`
Firewood is licensed by the Ecosystem License. For more information, see the LICENSE file.