smol-rs/async-task

HTTP clients for async-task

Closed this issue · 5 comments

Hey, I'm currently writing the async Rust book for O'Reilly. I love this crate and I generally use it for teaching the concepts behind async runtimes. However, I usually use crates like Tokio at work because, well, that's just what industry seems to go for, and I have no say in rewriting the entire stack. So when I got the opportunity to write the async Rust book, I was excited to put this crate into chapter three. However, I've quickly realised that although writing custom queues with this crate is a dream, the HTTP crates I usually lean on just don't work, as they're looking for Tokio runtimes etc. I'd love to have a complete runtime with HTTP for the book built on this crate. Are there any HTTP crates that work with async-task? If not, I'm up for collaborating on an HTTP crate that can work with async-task.

You can fit hyper into smol; see this example (it uses smol::spawn, but it shouldn't be too hard to replace that with a real executor).

You can also use async-h1, but it pulls in async-std as a dependency.

Thanks for the response. I'm still getting a Tokio error. I appreciate I might be doing something stupid, but I've created an executor like the following:

#[derive(Clone)]
struct SmolExecutor;

impl<F: Future + Send + 'static> hyper::rt::Executor<F> for SmolExecutor {
    fn execute(&self, fut: F) {
        spawn_task!(async { drop(fut.await) }).detach();
    }
}

The spawn_task! macro is a wrapper around the following function:

fn spawn_task<F, T>(future: F, order: FutureType) -> Task<T>
where
    F: Future<Output = T> + Send + 'static,
    T: Send + 'static,
{
    static HIGH_CHANNEL: Lazy<(Sender<Runnable>, Receiver<Runnable>)> =
        Lazy::new(|| flume::unbounded::<Runnable>());
    static LOW_CHANNEL: Lazy<(Sender<Runnable>, Receiver<Runnable>)> =
        Lazy::new(|| flume::unbounded::<Runnable>());
    static HIGH_QUEUE: Lazy<flume::Sender<Runnable>> = Lazy::new(|| {
. . .

which goes on to spawn tasks and send Runnables to the queues. The spawn_task! macro works fine when called directly on async blocks and on structs that implement the Future trait. However, it fails with the following hyper code, where I try to make a simple HTTP GET request:
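For anyone following along, the queue-based spawning pattern described above can be sketched with the standard library alone (no async-task or flume, and the high/low priority queues collapsed into a single channel). All names here are illustrative, not the actual code from the thread:

```rust
use std::future::Future;
use std::pin::Pin;
use std::sync::mpsc::{Receiver, Sender};
use std::sync::{Arc, Mutex};
use std::task::{Context, Wake, Waker};

type BoxFuture = Pin<Box<dyn Future<Output = ()> + Send>>;

// A task owns its future plus a handle back to the run queue,
// so its waker can re-schedule it.
struct Task {
    future: Mutex<Option<BoxFuture>>,
    queue: Mutex<Sender<Arc<Task>>>,
}

impl Wake for Task {
    fn wake(self: Arc<Self>) {
        // Re-queue the task when something wakes it.
        let sender = self.queue.lock().unwrap().clone();
        let _ = sender.send(self.clone());
    }
}

// Push a future onto the run queue (the moral equivalent of spawn_task!).
fn spawn(queue: &Sender<Arc<Task>>, fut: impl Future<Output = ()> + Send + 'static) {
    let task = Arc::new(Task {
        future: Mutex::new(Some(Box::pin(fut))),
        queue: Mutex::new(queue.clone()),
    });
    queue.send(task).expect("run queue closed");
}

// Drain the queue, polling each task once per wake-up. The loop ends
// once every sender (caller's handle plus each live task's) is dropped.
fn run(queue: Receiver<Arc<Task>>) {
    while let Ok(task) = queue.recv() {
        let mut slot = task.future.lock().unwrap();
        if let Some(mut fut) = slot.take() {
            let waker = Waker::from(task.clone());
            let mut cx = Context::from_waker(&waker);
            // Pending futures go back into the slot; their waker
            // will re-queue the task later.
            if fut.as_mut().poll(&mut cx).is_pending() {
                *slot = Some(fut);
            }
        }
    }
}
```

Usage: create a channel, `spawn(&tx, some_future)`, drop `tx`, then `run(rx)`. A real executor like the one in the thread adds priority queues and worker threads on top of this skeleton.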

use hyper::{Client, Uri};
. . .
    let future = async {
        let connector = hyper::client::HttpConnector::new();
        let client = Client::builder().executor(SmolExecutor).build::<_, hyper::Body>(connector);
        let url = "http://httpbin.org/get".parse::<Uri>().unwrap();
        let response = client.get(url).await.unwrap();
    
        // Print the response body
        let body = hyper::body::to_bytes(response.into_body()).await.unwrap();
        println!("Response body: {:?}", body);
    };

    let test = spawn_task!(future);
    let _outcome = future::block_on(test);

And I get the following error:

thread '<unnamed>' panicked at 'there is no reactor running, must be called from the context of a Tokio 1.x runtime'

I'm currently using the following hyper crate:

hyper = { version = "0.14.26", features = ["http1", "http2", "client", "runtime"] }

Is there something obvious I'm missing?

You're using the HttpConnector type, which uses tokio under the hood... and since you're not using the tokio runtime, it causes that panic.

What you want to do is define your own connector type using smol primitives. See here for an example of how this may be done.

Thank you for pointing me in the right direction. Everything is now working as it should. I will talk to the editors and see if I can dedicate two chapters to smol, as I have to guide the readers through the code, and casually lumping all the client code in at the end of chapter three would not be good for readers trying to learn about async. However, not covering how to implement something like HTTP on the runtime they spent a whole chapter building would also be a poor experience. Also, sorry about not checking the examples before posting here; I'll look through them before bothering you again. Really appreciate the quick feedback.

No problem! Let me know if you have any other questions, I'm happy to answer them.

Closing this issue for now since I think it's been answered to satisfaction.