tower-rs/tower-grpc

can't send 2 unary requests in one client concurrently

hicqu opened this issue · 11 comments

hicqu commented

Hi, I'm learning tower-grpc's public API. I started from tower-grpc-example/helloworld, and my client.rs is attached. When I run my client, both the server side and the client side report errors:
On the server side:

[2019-05-06T10:39:41Z ERROR server] h2 error: Protocol(Error(Io, Os { code: 104, kind: ConnectionReset, message: "Connection reset by peer" }))

On the client side:

thread 'main' panicked at 'gRPC request failed; err=Status { code: Unknown, message: "operation was canceled: connection was not ready" }', src/client.rs:65:26

Could you help me?
client.txt

Looks like the background task never got spawned somehow, or got shut down early. @seanmonstar may know more, but maybe this has to do with the new with_executor fn in tower-hyper?

The "connection was not ready" message is coming from the SendRequest, trying to send a request without first checking poll_ready. You can call Greeter::poll_ready before sending a request.

Also, I'd note that instead of using tokio_threadpool directly, it'd be better to use a runtime from tokio (either the single-threaded or the threadpool version is fine), since the threadpool by itself won't have the other parts of the runtime set up (a reactor, a timer, etc.). Lastly, the Future::wait method also doesn't combine well with this; consider using runtime.block_on(future) instead.
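
As a minimal sketch of that suggestion (tokio 0.1 / futures 0.1, matching this example; `build_rpc_future` is just a placeholder for the client-construction and say_hello future your client.rs already builds):

    // Sketch: drive the request on a full tokio runtime instead of Future::wait.
    // `build_rpc_future` is a placeholder for the future client.rs already constructs.
    let mut runtime = tokio::runtime::Runtime::new().expect("failed to build runtime");

    // block_on runs the future with the runtime's reactor, timer, and executor
    // available, which a bare thread pool plus Future::wait does not provide.
    let response = runtime
        .block_on(build_rpc_future())
        .expect("gRPC request failed");
    println!("RESPONSE = {:?}", response);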

hicqu commented

@seanmonstar thanks for your answer. I have tried such code in my client:

    // same client-creation code as before, omitted...
    let name = "Client".to_owned();
    runtime.executor().spawn(
        client
            .say_hello(Request::new(HelloRequest { name: name.clone() }))
            .map_err(|e| panic!("gRPC request failed; err={:?}", e))
            .map(|resp| println!("RESPONSE = {:?}", resp)),
    );
    runtime.executor().spawn(
        client
            .say_hello(Request::new(HelloRequest { name: name.clone() }))
            .map_err(|e| panic!("gRPC request failed; err={:?}", e))
            .map(|resp| println!("RESPONSE = {:?}", resp)),
    );
    thread::sleep(time::Duration::from_secs(100));

but I got the same error at the client side:

thread 'runtime1' panicked at 'gRPC request failed; err=Status { code: Unknown, message: "operation was canceled: connection was not ready" }', src/client.rs:69:26
note: Run with `RUST_BACKTRACE=1` environment variable to display a backtrace.
RESPONSE = Response { metadata: MetadataMap { headers: {"content-type": "application/grpc+proto", "date": "Tue, 07 May 2019 06:41:07 GMT"} }, message: HelloReply { message: "hello, Client" } }

So I guess the key is that I don't call client.poll_ready correctly. However, if I spawn just one say_hello, it runs correctly. Could you tell me where poll_ready gets called internally?

Maybe you are wondering why I'm trying to avoid tokio::run and Runtime::block_on. That's because I want to call multiple say_hellos simultaneously, but these two functions need to take ownership of the client. So maybe we need to mark the client as Sync?

@hicqu you can use tower-buffer to get a cloneable version of the client, and then use the ServiceExt trait in tower to write client.ready().say_hello(..), which will poll_ready until the client is ready to accept the next rpc.

There is now an easier way to get a future for when a client is ready, and the examples have been updated to use it in #172.
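
Roughly, the pattern looks like this (a trimmed sketch, not the exact example code; `conn` stands for the connected HTTP/2 service from the helloworld example, error handling is left out, and the tower-buffer wrapping from the previous comment is only needed if you want multiple clonable handles):

    // Trimmed sketch: wait for readiness, then issue the rpc on the ready client.
    let say_hello = client::Greeter::new(conn)
        // Resolves to the client once poll_ready has returned Ready.
        .ready()
        // Only now is it safe to issue the rpc.
        .and_then(|mut client| {
            client.say_hello(Request::new(HelloRequest {
                name: "Client".to_owned(),
            }))
        })
        .map(|response| println!("RESPONSE = {:?}", response));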

hicqu commented

Thank you @LucioFranco @seanmonstar. ready is a nice interface that makes the client clearer, and now I believe I can wait for 2 HelloResponses simultaneously with tower-buffer.
However, I have one last question about ready: if we want to send 2 requests with 1 client, I guess the client's status will go not ready -> ready -> not ready -> ready -> not ready -> ready?

  • The first time it becomes ready because we call ready and the returned future resolves.
  • The second time it becomes ready because the first request has finished.
  • The last time it becomes ready because the second request has finished.

Am I right? If so, it seems every connection carries at most 1 HTTP/2 stream at a time, because every gRPC call runs on one HTTP/2 stream and there is at most 1 gRPC call in flight at a time (since the client becomes not ready while waiting for a response). I guess that can't fully utilize one connection.

The connection will be ready to send a new request even while the first one is still in-flight, as long as you haven't reached the max concurrent streams setting sent by the server.

The examples could be changed to call ready, send a request and tokio::spawn it, and then wait on ready again. It should be ready almost immediately, in most cases.
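
Something along these lines (a sketch only: `client` is the connected Greeter from the example, and the map_err calls are just there to flatten the error types for the futures 0.1 combinators):

    // Sketch: wait for readiness, spawn the first rpc so it stays in flight,
    // then wait for readiness again before issuing the second rpc on the same
    // connection. Error types are flattened to () purely for the combinators.
    let two_rpcs = client
        .ready()
        .map_err(|e| eprintln!("client not ready: {:?}", e))
        .and_then(|mut client| {
            let first = client
                .say_hello(Request::new(HelloRequest { name: "first".to_owned() }))
                .map(|resp| println!("RESPONSE 1 = {:?}", resp))
                .map_err(|e| eprintln!("request 1 failed: {:?}", e));
            tokio::spawn(first);

            // Usually ready again almost immediately, as long as the server's
            // max concurrent streams limit has not been reached.
            client
                .ready()
                .map_err(|e| eprintln!("client not ready: {:?}", e))
        })
        .and_then(|mut client| {
            client
                .say_hello(Request::new(HelloRequest { name: "second".to_owned() }))
                .map(|resp| println!("RESPONSE 2 = {:?}", resp))
                .map_err(|e| eprintln!("request 2 failed: {:?}", e))
        });

    tokio::run(two_rpcs);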

hicqu commented

Thanks for your explanation! Yes, I was mistaken about ready. However, client.ready moves the client, so how do I wait on it again? And it seems the client can't be cloned either.

hicqu commented

Oh, I got it. It seems I can send many requests simultaneously without tower-buffer. Thank you!

agend commented

@hicqu How did you overcome the lack of Clone (tower_h2 had it) for tower_hyper::Connection in a concurrent scenario?

@hicqu I got the same ConnectionReset error on the Rust server side; however, this error only occurred when using the grpc-go helloworld client to perform the request. See my issue #187.