Atomix

Wait-free distributed coordination framework for building distributed systems on the Raft consensus algorithm

Persistent • Consistent • Fault-tolerant • Asynchronous • Database • Coordination • Framework

Atomix is a high-level asynchronous framework for building fault-tolerant distributed systems. It combines the consistency of ZooKeeper with the usability of Hazelcast to provide tools for managing and coordinating stateful resources in a distributed system. Its strongly consistent, fault-tolerant data store supports use cases such as distributed locking, group membership, leader election, messaging, distributed variables, and maps.
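The snippets below assume an atomix instance (a replica or a client) has already been created and started. As a rough sketch only, in which the builder options, transport, storage directory, and address are illustrative assumptions rather than the definitive setup, bootstrapping a replica might look roughly like this:

// Hypothetical bootstrap sketch: the builder options, transport, storage
// directory, and address are illustrative assumptions; see the documentation
// for the exact API.
AtomixReplica replica = AtomixReplica.builder(new Address("localhost", 5000))
  .withTransport(new NettyTransport())
  .withStorage(new Storage("logs/server1"))
  .build();

// Start the replica; the returned future completes once it has joined the cluster
Atomix atomix = replica.bootstrap().join();

With an atomix instance in hand, resources can be created and used as follows: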

// Get a distributed lock
DistributedLock lock = atomix.getLock("my-lock").get();

// Acquire the lock
CompletableFuture<Void> future = lock.lock();

// Once the lock is acquired, release the lock
future.thenRun(() -> lock.unlock());

// Get a distributed membership group
DistributedMembershipGroup group = atomix.getMembershipGroup("my-group").get();

// Join the group
group.join().thenAccept(member -> {

  // Leave the group
  member.leave();

});

// When a member joins the group, print a message
group.onJoin(member -> System.out.println(member.id() + " joined!"));

// When a member leaves the group, print a message
group.onLeave(member -> System.out.println(member.id() + " left!"));

// Join a group
CompletableFuture<LocalMember> future = group.join();

// Once the member has joined the group, register an election listener
future.thenAccept(member -> {
  member.onElection(term -> {
    System.out.println("Elected leader!");
    member.resign();
  });
});

// Get a distributed topic
DistributedTopic<String> topic = atomix.getTopic("my-topic").get();

// Register a message consumer
topic.consumer(message -> System.out.println(message));

// Publish a message to the topic
topic.publish("Hello world!");

// Get a distributed long
DistributedLong counter = atomix.getLong("my-long").get();

// Increment the counter
long value = counter.incrementAndGet().get();

// Get a distributed map
DistributedMap<String, String> map = atomix.getMap("my-map").get();

// Put a value in the map
map.put("atomix", "is great!").join();

// Get a value from the map
map.get("atomix").thenAccept(value -> System.out.println("atomix " + value));

...and much more

Examples

Users are encouraged to explore the examples in the /examples directory. The leader election example is a good place to start: it demonstrates a set of replicas that elect a leader among themselves.

To run the leader election example:

  1. Clone this repository: git clone --branch master git@github.com:atomix/atomix.git
  2. Navigate to the project directory: cd atomix
  3. Compile the project: mvn package
  4. Run the following three commands in three separate processes from the root directory of the project:
java -jar examples/leader-election/target/atomix-leader-election.jar logs/server1 localhost:5000 localhost:5001 localhost:5002
java -jar examples/leader-election/target/atomix-leader-election.jar logs/server2 localhost:5001 localhost:5000 localhost:5002
java -jar examples/leader-election/target/atomix-leader-election.jar logs/server3 localhost:5002 localhost:5000 localhost:5001

Each instance of the leader election example starts an AtomixReplica, connects to the other replicas in the cluster, creates a DistributedLeaderElection, and awaits an election. The first time a node is elected leader, it prints the message "Elected leader!". If the leading process is killed, a new leader will be elected a few seconds later and will print the same message.

Note that the same election process works with AtomixClients as well. Atomix provides stateful nodes (replicas), which store resource state changes on disk and replicate them to other replicas, and stateless nodes (clients), which operate on resources remotely. Both types of nodes can use the same resources in the same ways. This makes Atomix particularly well suited for embedding in server-side technologies without the overhead of a Raft server on every node.
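For illustration, a stateless client connecting to the same cluster might be set up roughly as sketched below; the builder, the connect() call, and the addresses are assumptions consistent with the example above, not a definitive API reference.

// Hypothetical client sketch: the builder, connect() arguments, and addresses
// are illustrative assumptions; see the documentation for the exact API.
AtomixClient client = AtomixClient.builder().build();

// Connect to the cluster by supplying the replicas' addresses
Atomix atomix = client.connect(
    new Address("localhost", 5000),
    new Address("localhost", 5001),
    new Address("localhost", 5002)).join();

// Clients operate on the same resources as replicas
DistributedLock lock = atomix.getLock("my-lock").get();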

See the website for documentation and examples.