dfs is a tiny distributed key/value store built on top of
HashiCorp Raft for consensus and
gRPC for the public API. It is intended as a learning example rather
than a production system. The project targets Unix-like environments
with FUSE; Windows platforms are unsupported.
```sh
go build ./cmd/dfs
```

Each process hosts a single node. Configuration is supplied through
environment variables prefixed with `DFS_`. Raft traffic defaults to port
12000 and the gRPC API to port 13000.
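As a rough sketch of how a process could pick up this configuration, the
snippet below reads the environment with the documented defaults. Only
`DFS_ID`, `DFS_PEERS`, and the two default ports come from this README; the
struct, its field names, and the `getenv` helper are illustrative.

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// nodeConfig collects the per-process settings described above.
// The field names are illustrative, not the project's actual types.
type nodeConfig struct {
	ID       string   // DFS_ID, e.g. "node1"
	RaftAddr string   // Raft traffic, defaults to port 12000
	GRPCAddr string   // public gRPC API, defaults to port 13000
	Peers    []string // DFS_PEERS, comma-separated peer list
}

// getenv returns the value of key, or def when the variable is unset.
func getenv(key, def string) string {
	if v := os.Getenv(key); v != "" {
		return v
	}
	return def
}

func loadConfig() nodeConfig {
	cfg := nodeConfig{
		ID:       getenv("DFS_ID", "node1"),
		RaftAddr: ":12000",
		GRPCAddr: ":13000",
	}
	if peers := os.Getenv("DFS_PEERS"); peers != "" {
		cfg.Peers = strings.Split(peers, ",")
	}
	return cfg
}

func main() {
	cfg := loadConfig()
	fmt.Printf("node %s: raft %s, grpc %s, peers %v\n",
		cfg.ID, cfg.RaftAddr, cfg.GRPCAddr, cfg.Peers)
}
```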
```sh
# start first node
DFS_ID=node1 ./dfs

# start second node and join the first
DFS_ID=node2 DFS_PEERS=node1 ./dfs
```

Once a leader is elected you can store and retrieve data using any gRPC
client. The USAGE.md file shows examples with grpcurl and Docker Compose
for a three-node cluster.
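The join path itself is not shown in this README. A plausible leader-side
sketch, assuming the new node's request reaches the current leader and
membership is changed with hashicorp/raft's `AddVoter` (the `Join` helper
and its signature are illustrative):

```go
package node

import (
	"fmt"

	"github.com/hashicorp/raft"
)

// Join adds a peer to the cluster configuration. Only the current leader
// may change membership; a follower receiving such a request would need
// to redirect or forward it to the leader.
func Join(r *raft.Raft, id, raftAddr string) error {
	if r.State() != raft.Leader {
		return fmt.Errorf("not the leader, cannot add %s", id)
	}
	// AddVoter appends a configuration-change entry to the Raft log.
	f := r.AddVoter(raft.ServerID(id), raft.ServerAddress(raftAddr), 0, 0)
	return f.Error()
}
```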
The `dfs` binary accepts the following flags:

- `-id` – node identifier (default `node1`).
- `-raft` – Raft bind address (default `:12000`).
- `-grpc` – gRPC bind address (default `:13000`).
- `-data` – data directory for Raft state (default `data`).
- `-peers` – comma-separated peer Raft addresses.
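These map directly onto Go's standard flag package. A minimal definition
sketch with the listed defaults; the variable names are illustrative, and
how the flags interact with the `DFS_` environment variables above is not
specified here:

```go
package main

import (
	"flag"
	"fmt"
)

// Flag definitions mirroring the list above; defaults come from the README.
var (
	id       = flag.String("id", "node1", "node identifier")
	raftAddr = flag.String("raft", ":12000", "Raft bind address")
	grpcAddr = flag.String("grpc", ":13000", "gRPC bind address")
	dataDir  = flag.String("data", "data", "data directory for Raft state")
	peers    = flag.String("peers", "", "comma-separated peer Raft addresses")
)

func main() {
	flag.Parse()
	fmt.Printf("id=%s raft=%s grpc=%s data=%s peers=%s\n",
		*id, *raftAddr, *grpcAddr, *dataDir, *peers)
}
```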
The gRPC API exposes two methods defined in `proto/dfs.proto`:

- `Put` stores a key and opaque byte data.
- `Get` retrieves the data for a key.

Examples using `grpcurl` are available in USAGE.md.
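For a Go client, a call sequence could look like the sketch below. It
assumes the generated stubs are importable as a `pb` package and that the
messages look like `PutRequest{Key, Data}`, `GetRequest{Key}`, and
`GetResponse{Data}`; the real names and import path are defined by
`proto/dfs.proto` and the project's module.

```go
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"

	pb "example.com/dfs/proto" // assumed import path for the generated code
)

func main() {
	// Connect to a node's gRPC port (default 13000).
	conn, err := grpc.Dial("localhost:13000",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()

	client := pb.NewFileServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Store an opaque blob under a key, then read it back.
	if _, err := client.Put(ctx, &pb.PutRequest{Key: "hello", Data: []byte("world")}); err != nil {
		log.Fatalf("put: %v", err)
	}
	resp, err := client.Get(ctx, &pb.GetRequest{Key: "hello"})
	if err != nil {
		log.Fatalf("get: %v", err)
	}
	log.Printf("got %q", resp.Data)
}
```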
Each node mounts a read-only filesystem at `/mnt/dfs` backed by a cache
directory `/mnt/hostfs`. New or modified files written to the cache are
replicated into the DFS and become visible under the mount. See
FUSE.md for details on mounting and watching the cache.
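One way to drive that replication is a filesystem watcher over the cache
directory. The sketch below uses github.com/fsnotify/fsnotify; the library
choice and the `putFile` callback (standing in for the gRPC `Put` call) are
assumptions, and FUSE.md documents the actual mechanism.

```go
package cache

import (
	"log"
	"os"
	"path/filepath"

	"github.com/fsnotify/fsnotify"
)

// watchCache replicates files written to cacheDir into the DFS via putFile,
// which would wrap the gRPC Put call. It blocks until the watcher closes.
func watchCache(cacheDir string, putFile func(name string, data []byte) error) error {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		return err
	}
	defer w.Close()
	if err := w.Add(cacheDir); err != nil {
		return err
	}
	for {
		select {
		case ev, ok := <-w.Events:
			if !ok {
				return nil
			}
			// Replicate newly created or modified files.
			if ev.Op&(fsnotify.Create|fsnotify.Write) != 0 {
				data, err := os.ReadFile(ev.Name)
				if err != nil {
					log.Printf("read %s: %v", ev.Name, err)
					continue
				}
				if err := putFile(filepath.Base(ev.Name), data); err != nil {
					log.Printf("replicate %s: %v", ev.Name, err)
				}
			}
		case err, ok := <-w.Errors:
			if !ok {
				return nil
			}
			log.Printf("watch error: %v", err)
		}
	}
}
```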
The project is intentionally small:

- `cmd/dfs` contains the entry point and configuration loading.
- `internal/node` wraps a Raft instance.
- `internal/store` implements the replicated key/value state machine.
- `internal/server` exposes the gRPC `FileService` backed by the store.
New functionality can be added by extending the store and exposing new RPC methods in the server package.
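For orientation, here is a minimal sketch of what a hashicorp/raft FSM for
a key/value store can look like; the `command` encoding, the `Store` type,
and the omitted snapshotting are illustrative, not the actual contents of
`internal/store`.

```go
package store

import (
	"encoding/json"
	"errors"
	"io"
	"sync"

	"github.com/hashicorp/raft"
)

// command is the log entry applied on every node; the shape is illustrative.
type command struct {
	Op   string `json:"op"` // e.g. "put"
	Key  string `json:"key"`
	Data []byte `json:"data"`
}

// Store is a raft.FSM holding the replicated key/value data in memory.
type Store struct {
	mu   sync.RWMutex
	data map[string][]byte
}

// Compile-time check that Store satisfies the raft.FSM interface.
var _ raft.FSM = (*Store)(nil)

func New() *Store { return &Store{data: make(map[string][]byte)} }

// Apply is called by Raft once a log entry has been committed on this node.
func (s *Store) Apply(l *raft.Log) interface{} {
	var c command
	if err := json.Unmarshal(l.Data, &c); err != nil {
		return err
	}
	s.mu.Lock()
	defer s.mu.Unlock()
	if c.Op == "put" {
		s.data[c.Key] = c.Data
	}
	return nil
}

// Get serves reads from local state; linearizable reads need extra care.
func (s *Store) Get(key string) ([]byte, bool) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	d, ok := s.data[key]
	return d, ok
}

// Snapshot and Restore are required by raft.FSM but omitted in this sketch.
func (s *Store) Snapshot() (raft.FSMSnapshot, error) {
	return nil, errors.New("snapshots not implemented in this sketch")
}

func (s *Store) Restore(rc io.ReadCloser) error {
	defer rc.Close()
	return errors.New("restore not implemented in this sketch")
}
```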