Tradias/asio-grpc

Questions about CPU usage and event multiplexing


Hi, we are considering using this library in a project and would like to clarify a few things first.

  1. Our first choice would be to use a single thread as shown in this example, but I have concerns about the CPU usage: this strategy seems to be based on polling both the asio event loop (io_context) and the grpc completion queue with a zero timeout to prevent the call to AsyncNext from blocking. If that is the case, the loop is likely to consume 100% of one CPU core, which might be prohibitive for our customers. On the other hand, if it uses a small timeout to keep CPU usage down, it might add latency, since both event loops cannot run at the same time. Is this analysis correct?

  2. Can I perform multiple concurrent calls to RPC::request (as shown here) on the same context, or should I wait for the completion of the last one before performing a new request?

Thanks.

Hi, thanks for considering this library :).

  1. Correct, without a timeout this would consume 100% CPU. In agrpc::run I therefore use a backoff policy that goes from the default of 250ms down to 0ms depending on the most recent activity on either context: if a context has just processed work in its poll call, the timeout is reduced to 0ms; if not, it is gradually increased back to 250ms (a simplified sketch of this idea follows after this list). I am personally not a big fan of this API either, but users seem to like the convenience of running the io_context and GrpcContext on the same thread (and thereby avoiding any need for synchronization). I feel that it needs more real-world testing though. I have some performance numbers for unary RPCs in the [README](https://github.com/Tradias/asio-grpc#performance) (look for cpp_asio_grpc_io_context_coro).
  2. Yes, you can; otherwise it wouldn't be an asynchronous API, right? 😁
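To make the backoff idea concrete, here is a rough sketch of such a combined event loop, written directly against grpc::CompletionQueue and boost::asio::io_context. It is not agrpc::run's actual implementation; the 10ms increment and the loop structure are invented for illustration:

```cpp
#include <grpcpp/grpcpp.h>

#include <boost/asio/io_context.hpp>

#include <algorithm>
#include <chrono>

// Runs handlers of `io_context` and events of `cq` on the current thread,
// adapting the AsyncNext timeout to recent activity on either side.
void run_both(boost::asio::io_context& io_context, grpc::CompletionQueue& cq)
{
    using namespace std::chrono;
    constexpr auto max_timeout = milliseconds{250};
    auto timeout = max_timeout;
    for (;;)
    {
        // Run all io_context handlers that are ready right now, without
        // blocking. (A real implementation must also handle the io_context
        // running out of work and stopping.)
        const bool asio_was_busy = io_context.poll() > 0;

        // Wait for the next completion queue event, but at most `timeout`.
        void* tag = nullptr;
        bool ok = false;
        const auto status = cq.AsyncNext(&tag, &ok, system_clock::now() + timeout);
        if (status == grpc::CompletionQueue::SHUTDOWN)
        {
            break;
        }
        const bool grpc_was_busy = status == grpc::CompletionQueue::GOT_EVENT;
        if (grpc_was_busy)
        {
            // ... process `tag` ...
        }

        // Backoff: poll eagerly (0ms) while either side is busy, otherwise
        // step the timeout back up towards 250ms to keep idle CPU usage low.
        timeout = (asio_was_busy || grpc_was_busy)
                      ? milliseconds{0}
                      : std::min(max_timeout, timeout + milliseconds{10});
    }
}
```

With a scheme like this the loop only spins while there is actual work; once both contexts go idle, the AsyncNext call blocks for up to 250ms and CPU usage drops accordingly.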

Thanks

> Yes, you can; otherwise it wouldn't be an asynchronous API, right?

In some low-level libraries like Boost.Beast the user is responsible for synchronizing reads and writes.

> In some low-level libraries like Boost.Beast the user is responsible for synchronizing reads and writes.

For a single streaming RPC you can only have one outstanding read/write at a time. But of course, you can have multiple clients, each performing its own RPC, and there is no need to synchronize reads/writes across those clients.
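For illustration, here is a minimal sketch of starting several unary RPCs concurrently on the same GrpcContext, assuming the helloworld example service and the ClientRPC-based API from the library's client examples (names such as PrepareAsyncSayHello and the exact request overload are assumptions and may differ between asio-grpc versions):

```cpp
#include <agrpc/asio_grpc.hpp>
#include <boost/asio/awaitable.hpp>
#include <boost/asio/co_spawn.hpp>
#include <boost/asio/detached.hpp>
#include <boost/asio/use_awaitable.hpp>
#include <grpcpp/grpcpp.h>

#include "helloworld.grpc.pb.h"

namespace asio = boost::asio;

// Unary RPC wrapper type in the style of the library's client examples.
using SayHelloRPC = agrpc::ClientRPC<&helloworld::Greeter::Stub::PrepareAsyncSayHello>;

asio::awaitable<void> make_request(agrpc::GrpcContext& grpc_context,
                                   helloworld::Greeter::Stub& stub)
{
    // Each RPC owns its own ClientContext/request/response, so the
    // concurrent calls below need no synchronization between each other.
    grpc::ClientContext client_context;
    helloworld::HelloRequest request;
    request.set_name("asio-grpc");
    helloworld::HelloReply response;
    const grpc::Status status = co_await SayHelloRPC::request(
        grpc_context, stub, client_context, request, response, asio::use_awaitable);
    // ... handle `status` and `response` ...
}

void start_concurrent_requests(agrpc::GrpcContext& grpc_context,
                               helloworld::Greeter::Stub& stub)
{
    // Ten RPCs in flight at the same time on the same GrpcContext.
    for (int i = 0; i < 10; ++i)
    {
        asio::co_spawn(grpc_context, make_request(grpc_context, stub), asio::detached);
    }
}
```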