jbaldwin/libcoro

Integration of `libcoro` with `boost::asio` Event Loop

oleksandrkozlov opened this issue · 6 comments

Hello,

I'm attempting to integrate libcoro into a single-threaded application that utilizes boost::asio for its event loop. Below is my current approach:

class my_coro_asio_io_scheduler {
public:
  explicit my_coro_asio_io_scheduler(boost::asio::io_context& ctx) : timer{ctx} {
    process_events();
  }

  auto schedule(coro::task<void>&& task) -> void {
    io_scheduler.schedule(std::move(task));
  }

private:
  auto process_events() -> void {
    timer.expires_after(std::chrono::milliseconds{50});
    timer.async_wait([this](const auto& ec) {
      if (ec) {
        return;
      }
      io_scheduler.process_events();
      process_events();
    });
  }

  coro::io_scheduler io_scheduler{
      {.thread_strategy = coro::io_scheduler::thread_strategy_t::manual,
       .execution_strategy = coro::io_scheduler::execution_strategy_t::process_tasks_inline}};
  boost::asio::steady_timer timer;
};

auto my_func() -> coro::task<void> { co_return; }

auto main() -> int {
  auto io_context = boost::asio::io_context{};
  auto my_scheduler = my_coro_asio_io_scheduler{io_context};
  io_context.post([&] { my_scheduler.schedule(my_func()); });
  io_context.run();
}

I have a few queries based on the above implementation:

  1. Event Processing Interval: Is there a suggested timeout for processing events? Does it vary based on the specific application? Is an interval of 50ms considered reasonable?

  2. Event Availability: Is it feasible to process events as and when they're available, rather than at fixed intervals (like 50ms in my case)?

  3. Scheduling and Processing: Is it necessary to use a timer (e.g., boost::asio::steady_timer)? Couldn't we directly process events post-scheduling as shown below?

    auto my_coro_asio_io_scheduler::schedule(coro::task<void>&& task) -> void {
      io_scheduler.schedule(std::move(task));
      while (io_scheduler.process_events() > 0U) { }
    }

While the examples in the README were insightful, they didn't cater to this specific use case. I'd appreciate any guidance or feedback on the approach.

Thank you.

Hey, good questions. I'll try my best to answer but I think it probably comes down to your use case.

  1. The shorter the interval between checks for events, the more CPU "churn" you'll incur, but the lower your latency will be. It really just depends on your needs and what trade-off you want for your application.

Some use cases I had envisioned for this method:

1.a) Calling process_events() manually in a single-threaded app where you cannot give full control to io_scheduler; you'll need to decide on a polling interval that is acceptable. This sounds like it may be your use case.

1.b) An external event that for some reason cannot be integrated into a task that io_scheduler can itself act on; the user would wait for that event and then trigger process_events() manually from that external trigger. In this scenario there is no poll/wait/delay; it is an immediate turnaround from the external event to io_scheduler.

  2. If you can use the dedicated io_scheduler thread then yes, this is feasible and built right into libcoro; otherwise you need a 1.b architecture to guarantee it.

  3. It depends. I think this loop could churn CPU if a network coroutine isn't ready for a long time, e.g. while waiting for a response: you'll keep calling into the function and nothing will happen until that response is available, possibly stalling other code in your application.


I think my question for your app is: which boost asio events do you want to run libcoro coroutines in response to? Can you use those events to trigger tasks instead of a 50ms timer, e.g. use case 1.b?

A larger architecture question would be whether you could give full control to io_scheduler and wrap the boost asio events in libcoro coroutine tasks. But I realize that for a large existing app this might be extremely difficult to do.

What in boost asio is an event that you then want to run coroutines in libcoro?

Actually, any async function that accepts a callback. For example, basic_stream_socket::async_read_some() [1]. The idea is to replace callbacks with co_awaits in the app (caller code) by converting boost async functions into coroutines.

Before/Now:

// My wrapper around `async_read_some()` that still uses a callback.
using my_callback = std::function<void(boost::system::error_code error, std::size_t bytes_transferred)>;
auto my_async_read_some(boost::asio::ip::tcp::socket& socket, boost::asio::mutable_buffer buffer, my_callback callback) -> void {
  socket.async_read_some(buffer, std::move(callback));
}

// My app
auto main() -> int {
  auto io_context = boost::asio::io_context{};
  // ...
  io_context.post([&] {
    my_async_read_some(socket, buffer, [&](auto, std::size_t bytes_transferred) {
      // ...
      io_context.stop();
    });
  });
  io_context.run();
}

Goal:

// My wrapper around `async_read_some()` that now uses a coroutine.
auto my_async_read_some(boost::asio::ip::tcp::socket& socket, boost::asio::mutable_buffer buffer) -> coro::task<std::size_t> {
  // ...
  auto event = coro::event{};
  socket.async_read_some(buffer, [&](/* ... */){
    // ...
    event.set();
  });
  co_await event;
  // ...
}

// My app
auto main() -> int {
  auto io_context = boost::asio::io_context{};
  // ...
  io_context.post([&] {
    my_scheduler.schedule([&]() -> coro::task<void> {
      std::size_t bytes_transferred = co_await my_async_read_some(socket, buffer);
      // ...
      io_context.stop();
    }());
  });
  io_context.run();
}

After event.set() can you call process events directly? Can you hand control to libcoro at that point in time?

I am not sure I understand. What does "process events directly" or "hand control to libcoro" exactly mean?

Try something like this:

// My wrapper around `async_read_some()` that now uses a coroutine.
auto my_async_read_some(boost::asio::ip::tcp::socket& socket, boost::asio::mutable_buffer buffer) -> coro::task<std::size_t> {
  // ...
  auto event = coro::event{};
  socket.async_read_some(buffer, [&](/* ... */){
    // ...
    event.set();                   // This sets the event to resume the awaiting coroutine, but since you are manually driving the event loop...
    io_scheduler.process_events(); // ...call process_events() so it picks up that the event was `set()`.
  });
  co_await event; // Execution should resume _here_.
  // ...
}

// My app
auto main() -> int {
  auto io_context = boost::asio::io_context{};
  // ...
  io_context.post([&] {
    my_scheduler.schedule([&]() -> coro::task<void> {
      std::size_t bytes_transferred = co_await my_async_read_some(socket, buffer);
      // ...
      io_context.stop();
    }());
  });
  io_context.run();
}

I'm going to close this out since I haven't heard back from you; feel free to re-open it if you are still going down this path and need help.