Deadpool

Dead simple pool implementation for Rust with async-await


Minimum supported Rust version: 1.63

Deadpool is a dead simple async pool for connections and objects of any type.

This crate provides two implementations:

  • Managed pool (deadpool::managed::Pool)

    • Creates and recycles objects as needed
    • Useful for database connection pools
    • Enabled via the managed feature in your Cargo.toml
  • Unmanaged pool (deadpool::unmanaged::Pool)

    • All objects need to be created by the user and added to the pool manually. It is also possible to create a pool from an existing collection of objects.
    • Enabled via the unmanaged feature in your Cargo.toml

Features

Feature         Description                                    Extra dependencies   Default
managed         Enable managed pool implementation             async-trait          yes
unmanaged       Enable unmanaged pool implementation           -                    yes
rt_tokio_1      Enable support for tokio crate                 tokio/time           no
rt_async-std_1  Enable support for async-std crate             async-std            no
serde           Enable support for deserializing pool config   serde/derive         no

The runtime features (rt_*) are only needed if you need support for timeouts. If you try to use timeouts without specifying a runtime at pool creation, the pool's get methods will return a PoolError::NoRuntimeSpecified error. A short sketch after the managed pool example below shows how to select a runtime and configure a timeout.

Managed pool (aka. connection pool)

This is the obvious choice for connection pools of any kind. Deadpool already comes with a couple of database connection pools which work out of the box.

Example

use async_trait::async_trait;
use deadpool::managed;

#[derive(Debug)]
enum Error { Fail }

struct Computer {}

impl Computer {
    async fn get_answer(&self) -> i32 {
        42
    }
}

struct Manager {}

#[async_trait]
impl managed::Manager for Manager {
    type Type = Computer;
    type Error = Error;
    
    async fn create(&self) -> Result<Computer, Error> {
        Ok(Computer {})
    }
    
    async fn recycle(&self, _: &mut Computer, _: &managed::Metrics) -> managed::RecycleResult<Error> {
        Ok(())
    }
}

type Pool = managed::Pool<Manager>;

#[tokio::main]
async fn main() {
    let mgr = Manager {};
    let pool = Pool::builder(mgr).build().unwrap();
    let mut conn = pool.get().await.unwrap();
    let answer = conn.get_answer().await;
    assert_eq!(answer, 42);
}
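
If one of the runtime features listed above is enabled, timeouts can be configured on the pool builder. The following is a minimal sketch reusing the Manager and Pool types from the example above; it assumes the rt_tokio_1 feature is enabled and relies on the max_size, runtime and wait_timeout builder methods as found in recent deadpool versions.

use std::time::Duration;

use deadpool::Runtime;

// Reuses `Manager` and `Pool` from the example above.
#[tokio::main]
async fn main() {
    let mgr = Manager {};
    let pool = Pool::builder(mgr)
        .max_size(16)
        // Selecting a runtime enables timeout support.
        .runtime(Runtime::Tokio1)
        // Wait at most five seconds for an object to become available.
        .wait_timeout(Some(Duration::from_secs(5)))
        .build()
        .unwrap();
    // If no object becomes available within five seconds, `get` returns
    // a timeout error instead of waiting forever.
    let conn = pool.get().await.unwrap();
    assert_eq!(conn.get_answer().await, 42);
}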

Database connection pools

Deadpool supports various database backends by implementing the deadpool::managed::Manager trait. The following backends are currently supported (a short deadpool-postgres sketch follows the table):

Backend          Crate
bolt-client      deadpool-bolt
tokio-postgres   deadpool-postgres
lapin (AMQP)     deadpool-lapin
redis            deadpool-redis
async-memcached  deadpool-memcached
rusqlite         deadpool-sqlite
diesel           deadpool-diesel
tiberius         deadpool-tiberius
r2d2             deadpool-r2d2
rbatis           rbatis
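
As an illustration of how these backends are typically wired up, here is a minimal sketch using deadpool-postgres; the Config fields and the create_pool call follow the patterns documented by that crate, and the connection parameters are placeholders.

use deadpool_postgres::{Config, Runtime};
use tokio_postgres::NoTls;

#[tokio::main]
async fn main() {
    // Placeholder connection parameters; adjust them for your environment.
    let mut cfg = Config::new();
    cfg.host = Some("localhost".to_string());
    cfg.user = Some("postgres".to_string());
    cfg.dbname = Some("deadpool".to_string());

    // Create the pool with an explicit runtime so timeouts are supported.
    let pool = cfg.create_pool(Some(Runtime::Tokio1), NoTls).unwrap();

    let client = pool.get().await.unwrap();
    let rows = client.query("SELECT 1 + 1", &[]).await.unwrap();
    let value: i32 = rows[0].get(0);
    assert_eq!(value, 2);
}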

Reasons for yet another connection pool

Deadpool is by no means the only pool implementation available. It does a few things differently, and that is the main reason for it to exist:

  • Deadpool is compatible with any executor. Objects are returned to the pool using the Drop trait. The health of those objects is checked upon next retrieval and not when they are returned. Deadpool never performs any actions in the background. This is the reason why deadpool does not need to spawn futures and does not rely on a background thread or task of any type.

  • Identical startup and runtime behaviour. When writing a long-running application, there usually should be no difference between startup and runtime if a database connection is temporarily unavailable. Nobody would expect an application to crash if the database becomes unavailable at runtime, so it should not crash on startup either. Creating the pool never fails, and errors are only ever returned when calling Pool::get().

    If you really want your application to crash on startup when objects can not be created, simply call pool.get().await.expect("DB connection failed") right after creating the pool.

  • Deadpool is fast. Locking primitives are held for the shortest possible duration. When returning an object to the pool, a single Mutex is locked, and when retrieving objects from the pool, a Semaphore is used to keep contention on that Mutex as low as possible.

  • Deadpool is simple. Dead simple. There is very little API surface. The actual code is barely 100 lines of code and lives in the two functions Pool::get and Object::drop.

  • Deadpool is extensible. By using post_create, pre_recycle and post_recycle hooks you can customize object creation and recycling to fit your needs.

  • Deadpool provides insights. All objects track Metrics and the pool provides a status method that can be used to find out details about the inner workings.

  • Deadpool is resizable. You can grow and shrink the pool at runtime without requiring an application restart. A short sketch after this list illustrates this together with the status method.
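
A small sketch of the last two points, again reusing the Manager and Pool types from the managed pool example above; it assumes the status and resize methods exposed by recent deadpool versions.

// Reuses `Manager` and `Pool` from the managed pool example above.
#[tokio::main]
async fn main() {
    let pool = Pool::builder(Manager {}).max_size(4).build().unwrap();

    // Status reports details such as the maximum size, the current size
    // and the number of available objects (printed via Debug here).
    println!("{:?}", pool.status());

    // Assumption: `resize` adjusts the maximum pool size at runtime.
    pool.resize(8);
    println!("{:?}", pool.status());
}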

Unmanaged pool

An unmanaged pool is useful when you can't write a manager for the objects you want to pool or simply don't want to. This pool implementation is slightly faster than the managed pool because it does not use a Manager trait to create and recycle objects but leaves it up to the user.

Unmanaged pool example

use deadpool::unmanaged::Pool;

struct Computer {}

impl Computer {
    async fn get_answer(&self) -> i32 {
        42
    }
}

#[tokio::main]
async fn main() {
    let pool = Pool::from(vec![
        Computer {},
        Computer {},
    ]);
    let s = pool.get().await.unwrap();
    assert_eq!(s.get_answer().await, 42);
}

FAQ

Why does deadpool depend on tokio? I thought it was runtime agnostic...

Deadpool depends on tokio::sync::Semaphore. This does not mean that the tokio runtime, or anything else from tokio, is being used or will be part of your build. You can easily check this by running the following command in your own code base:

cargo tree --format "{p} {f}"

License

Licensed under either of

  • Apache License, Version 2.0
  • MIT license

at your option.