Pronounced *R-aktor*.

A pure-Rust actor framework, inspired by Erlang's `gen_server`, with the speed + performance of Rust!
`ractor` tries to solve the problem of building and maintaining an Erlang-like actor framework in Rust. It gives a set of generic primitives and helps automate the supervision tree and the management of your actors, along with the traditional actor message-processing logic. It's built heavily on `tokio`, which is a hard requirement for `ractor`.
`ractor` is a modern actor framework written in 100% Rust with **no** `unsafe` code.
Additionally, `ractor` has a companion library, `ractor_cluster`, which is needed to deploy `ractor` in a distributed (cluster-like) scenario. `ractor_cluster` is not yet ready for public release, but it is a work in progress and coming shortly!
There are other actor frameworks written in Rust (Actix, riker, or just actors in Tokio), plus a whole list compiled on this Reddit post. Ractor tries to be different by modeling more closely on a pure Erlang `gen_server`. This means that each actor can also simply be a supervisor to other actors with no additional cost (simply link them together!). Additionally, we aim to stay close to Erlang's patterns, as they work quite well and are well utilized in the industry.
Additionally, we wrote `ractor` without building on some kind of "runtime" or "system" which needs to be spawned. Actors can be run independently, alongside other basic `tokio` runtimes, with little additional overhead.
We currently have full support for:
- Single-threaded message processing
- Actor supervision tree
- Remote procedure calls to actors
- Timers
- Named actor registry (`ractor::registry`) from Erlang's registered processes
- Process groups (`ractor::pg`) from Erlang's `pg` module
On our roadmap is adding more of Erlang's functionality, including potentially a distributed actor cluster.
Install `ractor` by adding the following to your Cargo.toml dependencies:

```toml
[dependencies]
ractor = "0.7"
```
`ractor` currently exposes a single feature, namely `cluster`, which exposes the functionality required for `ractor_cluster` to set up and manage a cluster of actors over a network link. This is a work in progress and is being tracked in #16.
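Enabling that feature would look something like the following (a sketch of the usual Cargo feature syntax, using the `cluster` feature name from the text above):

```toml
[dependencies]
ractor = { version = "0.7", features = ["cluster"] }
```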
Actors in `ractor` are very lightweight and can be treated as thread-safe. Each actor will only call one of its handler functions at a time, and they will never be executed in parallel. Following the actor model leads to microservices with well-defined state and processing logic.
An example `ping-pong` actor might be the following:
```rust
use ractor::{Actor, ActorProcessingErr, ActorRef};

/// [PingPong] is a basic actor that will print
/// ping..pong.. repeatedly until some exit
/// condition is met (a counter hits 10). Then
/// it will exit
pub struct PingPong;

/// These are the types of message [PingPong] supports
#[derive(Debug, Clone)]
pub enum Message {
    Ping,
    Pong,
}

impl Message {
    // retrieve the next message in the sequence
    fn next(&self) -> Self {
        match self {
            Self::Ping => Self::Pong,
            Self::Pong => Self::Ping,
        }
    }
    // print out this message
    fn print(&self) {
        match self {
            Self::Ping => print!("ping.."),
            Self::Pong => print!("pong.."),
        }
    }
}

// the implementation of our actor's "logic"
#[async_trait::async_trait]
impl Actor for PingPong {
    // An actor has a message type
    type Msg = Message;
    // and (optionally) internal state
    type State = u8;
    // Startup initialization args
    type Arguments = ();

    // Initially we need to create our state, and potentially
    // start some internal processing (by posting a message for
    // example)
    async fn pre_start(
        &self,
        myself: ActorRef<Self>,
        _: Self::Arguments,
    ) -> Result<Self::State, ActorProcessingErr> {
        // startup the event processing
        myself.cast(Message::Ping)?;
        Ok(0u8)
    }

    // This is our main message handler
    async fn handle(
        &self,
        myself: ActorRef<Self>,
        message: Self::Msg,
        state: &mut Self::State,
    ) -> Result<(), ActorProcessingErr> {
        if *state < 10u8 {
            message.print();
            myself.cast(message.next())?;
            *state += 1;
        } else {
            // don't send another message, rather stop the actor after 10 iterations
            myself.stop(None);
        }
        Ok(())
    }
}

#[tokio::main]
async fn main() {
    let (_actor, actor_handle) = Actor::spawn(None, PingPong, ())
        .await
        .expect("Failed to start actor");
    actor_handle.await.expect("Actor failed to exit cleanly");
}
```
which will output:

```bash
$ cargo run
ping..pong..ping..pong..ping..pong..ping..pong..ping..pong..
$
```
Actors communicate by passing messages to each other. A developer can define any message type which is `Send + 'static`, and it will be supported by `ractor`. There are 4 concurrent message types, which are listened to in priority order:
- Signals: Signals are the highest priority of all and will interrupt the actor wherever processing currently is (this includes terminating async work). There is only 1 signal today, `Signal::Kill`, which immediately terminates all work, including message processing and supervision-event processing.
- Stop: There is also a pre-defined stop signal. You can give a "stop reason" if you want, but it's optional. Stop is a graceful exit, meaning currently executing async work will complete, and on the next message-processing iteration Stop will take priority over future supervision events or regular messages. It will not terminate currently executing work, regardless of the provided reason.
- SupervisionEvent: Supervision events are messages from child actors to their supervisors in the event of their startup, death, and/or unhandled panic. Supervision events are how an actor's supervisor(s) are notified of events of their children and can handle lifetime events for them.
- Messages: Regular, user-defined messages are the last channel of communication to actors. They are the lowest priority of the 4 message types and denote general actor work. The first 3 message types (signals, stop, supervision) are generally quiet unless it's a lifecycle event for the actor, but this channel is the "work" channel doing what your actor wants to do!
Ractor actors can also be used to build a distributed pool of actors, similar to Erlang's EPMD, which manages inter-node connections and node naming. In our implementation, we have `ractor_cluster` to facilitate distributed `ractor` actors.
`ractor_cluster` has a single main type, namely the `NodeServer`, which represents a host of a `node()` process. It additionally has some macros and procedural macros to facilitate developer efficiency when building distributed actors. The `NodeServer` is responsible for:

- Managing all incoming and outgoing `NodeSession` actors, which represent a remote node connected to this host.
- Managing the `TcpListener` which hosts the server socket to accept incoming session requests.
The bulk of the logic for node interconnections, however, is held in the `NodeSession`, which manages:

- The underlying TCP connection, including reading and writing to the stream.
- The authentication between this node and its peer.
- The actor lifecycle for actors spawned on the remote system.
- Transmitting all inter-actor messages between nodes.
- PG group synchronization.

etc.
The `NodeSession` makes local actors available on a remote system by spawning `RemoteActor`s, which are essentially untyped actors that only handle serialized messages, leaving message deserialization up to the originating system. It also keeps track of pending RPC requests in order to match each request to its response upon reply. There are special extension points in `ractor` added specifically to support `RemoteActor`s; they aren't generally meant to be used outside of the standard `Actor::spawn(Some("name".to_string()), MyActor).await` pattern.
Note that not all actors are created equal. Actors need to support having their message types sent over the network link. This is done by overriding specific methods of the `ractor::Message` trait, which all messages must implement. Due to the lack of specialization support in Rust, if you choose to use `ractor_cluster`, you'll need to derive the `ractor::Message` trait for all message types in your crate. To make this a more painless process, we provide a few procedural macros.
Many actors are going to be local-only and have no need to send messages over the network link. This is the most basic scenario, and in this case the default `ractor::Message` trait implementation is fine. You can derive it quickly with:
```rust
use ractor_cluster::RactorMessage;
use ractor::RpcReplyPort;

#[derive(RactorMessage)]
enum MyBasicMessageType {
    Cast1(String, u64),
    Call1(u8, i64, RpcReplyPort<Vec<String>>),
}
```
This will implement the default `ractor::Message` trait for you, without you having to write it out by hand.
If you want your actor to support remoting, then you should use a different derive, namely:

```rust
use ractor_cluster::RactorClusterMessage;
use ractor::RpcReplyPort;

#[derive(RactorClusterMessage)]
enum MyBasicMessageType {
    Cast1(String, u64),
    #[rpc]
    Call1(u8, i64, RpcReplyPort<Vec<String>>),
}
```
which adds a significant amount of underlying boilerplate for the implementation (take a look yourself with `cargo expand`!). The short answer is: each enum variant needs to serialize to a byte array of arguments plus a variant name, and if it's an RPC, it also supplies a port that receives a byte array and deserializes the reply. Each of the types inside either the arguments or the reply type needs to implement the `ractor_cluster::BytesConvertable` trait, which just says that the value can be encoded to and decoded from a byte array. If you're using `prost` for your message type definitions (protobuf), we have a macro to auto-implement this for your types:
```rust
ractor_cluster::derive_serialization_for_prost_type! {MyProtobufType}
```
Besides that, just write your actor as you would. The actor itself will live where you define it and will be capable of receiving messages sent over the network link from other nodes in the cluster!
The original authors of `ractor` are Sean Lawlor (@slawlor), Dillon George (@dillonrg), and Evan Au (@afterdusk). To learn more about contributing to `ractor`, please see CONTRIBUTING.md.
This project is licensed under MIT.