Can I use asio-grpc inside an existing boost::asio application?
Closed this issue · 7 comments
I have an existing application started with:

```cpp
boost::asio::io_context io_ctx;
boost::asio::co_spawn(io_ctx, asioMain(), boost::asio::detached);
io_ctx.run();
```
Somewhere deep inside asioMain, in some coroutine, I want to co_await a gRPC request:

```cpp
boost::asio::awaitable<void> TritonClient::someCoroutine() {
    ...
    grpc::Status status = co_await RPC::request(?, stub, client_context, request, response, asio::use_awaitable);
    ...
}
```
Is that even possible? Do I still need to create an agrpc::GrpcContext? Do I run it? I don't know where to start.
I tried to outline your options in the documentation: https://tradias.github.io/asio-grpc/md_doc_using_asio_io_context.html
I recommend going for the "explicitly constructed io_context" with the "run on separate threads" approach.
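For orientation, a minimal sketch of that approach could look like the following. The work-guard usage and the idea of passing the GrpcContext into asioMain by reference are my assumptions layered on top of the linked documentation page, not code taken from it:

```cpp
#include <agrpc/asio_grpc.hpp>
#include <boost/asio.hpp>
#include <thread>

// Placeholder for the user's entry coroutine; here it takes the GrpcContext
// by reference so coroutines deeper down can pass it to RPC::request.
boost::asio::awaitable<void> asioMain(agrpc::GrpcContext& grpc_context);

int main()
{
    agrpc::GrpcContext grpc_context;

    // Keep grpc_context.run() from returning before any RPC has been started.
    auto guard = boost::asio::make_work_guard(grpc_context);
    std::thread grpc_thread{[&] { grpc_context.run(); }};

    boost::asio::io_context io_ctx;
    boost::asio::co_spawn(io_ctx, asioMain(grpc_context), boost::asio::detached);
    io_ctx.run();

    guard.reset();      // let run() return once outstanding work completes
    grpc_thread.join();
}
```

With asio::use_awaitable, the co_await'ed request completes back on the coroutine's own executor, so the application code keeps running on the io_context thread.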
If I run it on a separate thread, all my code will still run on the same thread (the io_context's thread). Is that correct? And only some internal agrpc work will run on the other thread, right?
Thank you for your help.
For co_spawn + asio::use_awaitable that is correct. Technically it depends on the completion handler's executor; some details here: #88 (comment)

In your case you can also use the slightly more performant GrpcContext::run_completion_queue function.
@Tradias
Is agrpc::ClientRPC::request thread-safe?

I have an agrpc::GrpcContext and the only work I do on it is agrpc::ClientRPC::request. If I call agrpc::GrpcContext::run from multiple threads, that means the request calls can get executed in parallel, right? Is that safe, assuming I call request with the same grpc_context and stub multiple times (but with different ClientContext, request, and response objects)?
GrpcContext::run may only be called from one thread at a time. For a multi-threaded client you want to create multiple GrpcContexts and then pick one (using some strategy) whenever you do a ClientRPC::request. See also this example: https://github.com/Tradias/asio-grpc/blob/master/example/multi-threaded-client.cpp
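The linked example picks the next GrpcContext round-robin. The index-selection part of that strategy can be sketched independently of any asio-grpc types (the class name here is made up for illustration):

```cpp
#include <atomic>
#include <cstddef>

// Hypothetical helper: hands out indices 0, 1, ..., size-1, 0, 1, ... so that
// each ClientRPC::request can be dispatched to a different GrpcContext.
class RoundRobinIndex
{
public:
    explicit RoundRobinIndex(std::size_t size) : size_(size) {}

    // Atomic fetch_add makes this safe to call from multiple threads.
    std::size_t next() { return counter_.fetch_add(1, std::memory_order_relaxed) % size_; }

private:
    std::atomic<std::size_t> counter_{0};
    std::size_t size_;
};
```

With a std::vector of GrpcContexts (each run by its own thread), `next()` selects which context to hand to ClientRPC::request.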
I was thinking of introducing an agrpc::GrpcContextPool class that encapsulates the round-robin strategy shown in the example, to make it possible to write something like:

```cpp
grpc::ServerBuilder server_builder;
int num_threads = 5;
agrpc::GrpcContextPool pool{server_builder, num_threads};
ClientRPC::request(grpc_context_pool, ...);
pool.run();
```

What do you think?
Choosing to limit a GrpcContext (a.k.a. grpc::CompletionQueue) to one thread was done due to the performance recommendation here: https://grpc.io/docs/guides/performance/

> If having to use the async completion-queue API, the best scalability trade-off is having numcpu's threads. The ideal number of completion queues in relation to the number of threads can change over time (as gRPC C++ evolves), but as of gRPC 1.41 (Sept 2021), using 2 threads per completion queue seems to give the best performance.
I expected it to be like io_context, but that would be nice too.
So if I use a different GrpcContext but the same stub, would that be thread-safe?

Yes, reusing the stub across multiple GrpcContexts is safe and preferred.
Allowing GrpcContext::run to be called from multiple threads would require significant effort and add runtime overhead; the same is true for asio::io_context. Every user would have to pay that cost, even those who use the more efficient approach of having one context per thread.