connection closed because of a broken pipe
hohowt opened this issue · 12 comments
It looks like a context cancel on the server side closed the connection, which in turn caused the client's broken pipe error?
Why would the server hit a context cancel?
> Why would the server hit a context cancel?

I checked: the server does configure keepalive, with a 2-minute default. Also, calling the same endpoint with grpcurl returns normally.
If you're using gRPC, could you try volo-grpc 0.8? Just pinning the version in Cargo.toml should be enough.
> If you're using gRPC, could you try volo-grpc 0.8? Just pinning the version in Cargo.toml should be enough.

After switching volo-grpc to 0.8 the project no longer compiles. I'll try implementing the same server functionality with volo and calling that instead.
> If you're using gRPC, could you try volo-grpc 0.8? Just pinning the version in Cargo.toml should be enough.

With volo as the server, requests go through fine. But grpcurl also working against the original server doesn't add up, so I debugged a bit and captured the following error on the server side:

```
received an illegal stream id: 1. headers frame: [FrameHeader HEADERS flags=END_STREAM|END_HEADERS stream=1 len=12]
```
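For context, the check the server is applying here comes from RFC 7540 §5.1.1: a client-initiated stream id must be odd, and each newly opened stream must use an id strictly greater than any the client used before on that connection. A minimal sketch of that rule (hypothetical types and names for illustration, not volo's or h2's actual implementation):

```rust
// Sketch of the HTTP/2 stream-id rule a server enforces (RFC 7540 §5.1.1):
// client streams use odd ids, and each new stream's id must be strictly
// greater than every previously used client stream id.
#[derive(Debug, PartialEq)]
enum StreamIdError {
    EvenClientId,  // even ids are reserved for server-initiated streams
    NotIncreasing, // the "received an illegal stream id" case
}

struct ServerConn {
    // Highest client stream id seen so far (0 = none yet).
    max_client_stream_id: u32,
}

impl ServerConn {
    fn new() -> Self {
        Self { max_client_stream_id: 0 }
    }

    // Called when a HEADERS frame tries to open a new stream.
    fn open_stream(&mut self, id: u32) -> Result<(), StreamIdError> {
        if id % 2 == 0 {
            return Err(StreamIdError::EvenClientId);
        }
        if id <= self.max_client_stream_id {
            return Err(StreamIdError::NotIncreasing);
        }
        self.max_client_stream_id = id;
        Ok(())
    }
}

fn main() {
    let mut conn = ServerConn::new();
    assert_eq!(conn.open_stream(1), Ok(()));
    assert_eq!(conn.open_stream(3), Ok(()));
    // Reusing stream 1 for a later request, as the buggy client does,
    // is rejected as an illegal stream id.
    assert_eq!(conn.open_stream(1), Err(StreamIdError::NotIncreasing));
}
```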
> If you're using gRPC, could you try volo-grpc 0.8? Just pinning the version in Cargo.toml should be enough.

> With volo as the server, requests go through fine. But grpcurl also working against the original server doesn't add up, so I debugged a bit and captured the following error on the server side: received an illegal stream id: 1. headers frame: [FrameHeader HEADERS flags=END_STREAM|END_HEADERS stream=1 len=12]

When issuing a request, both volo and grpcurl produce three frames, but grpcurl's stream id increases by 2 each time (e.g. 1, 3, 5), whereas volo's never increases (e.g. 1, 1, 1), and that is what triggers the error.
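For reference, RFC 7540 §5.1.1 requires each new client-initiated stream to take the next unused odd id (1, 3, 5, ...); all frames belonging to a single stream then reuse that one id. A minimal sketch of a conforming client-side allocator (hypothetical names, not h2's internals):

```rust
// Sketch of how an HTTP/2 client allocates stream ids (RFC 7540 §5.1.1):
// client-initiated streams use odd ids, and each new request takes the
// next odd id. The bug described above is a client that hands out 1
// for every request instead of advancing.
struct StreamIdAllocator {
    next: u32, // next client stream id; always odd
}

impl StreamIdAllocator {
    fn new() -> Self {
        Self { next: 1 }
    }

    fn next_request_id(&mut self) -> u32 {
        let id = self.next;
        self.next += 2; // skip even (server-reserved) ids
        id
    }
}

fn main() {
    let mut alloc = StreamIdAllocator::new();
    let ids: Vec<u32> = (0..3).map(|_| alloc.next_request_id()).collect();
    // Three successive requests get ids 1, 3, 5, matching what grpcurl does.
    assert_eq!(ids, vec![1, 3, 5]);
}
```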
@5waker Awesome! Would you be interested in submitting a PR to fix this?
> @5waker Awesome! Would you be interested in submitting a PR to fix this?

I tried several approaches, such as upgrading hyper and switching to set_nodelay(false), which buffers the data so it goes out in two writes instead of three; while debugging I could occasionally get data back (not a proper fix). None of them succeeded: the stream id is always 1 and never increases.
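As an aside on set_nodelay(false): it leaves Nagle's algorithm enabled on the TCP socket, so the kernel may coalesce small writes into fewer segments, which matches the frames leaving in two writes instead of three. It only changes packetization on the wire and cannot affect stream id allocation. A minimal std::net sketch (tokio's TcpStream exposes the same method):

```rust
use std::net::{TcpListener, TcpStream};

fn main() -> std::io::Result<()> {
    // Loopback listener just so we have a connected socket to configure.
    let listener = TcpListener::bind("127.0.0.1:0")?;
    let addr = listener.local_addr()?;
    let stream = TcpStream::connect(addr)?;

    // set_nodelay(false) keeps Nagle's algorithm on: the kernel may
    // coalesce several small writes (e.g. h2 frames) into one segment.
    stream.set_nodelay(false)?;
    assert!(!stream.nodelay()?);

    // set_nodelay(true) disables Nagle: each write is flushed promptly.
    stream.set_nodelay(true)?;
    assert!(stream.nodelay()?);
    Ok(())
}
```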
```
2024-01-21T10:58:27.827937Z DEBUG h2::client: binding client connection
2024-01-21T10:58:27.828017Z DEBUG h2::client: client connection bound
2024-01-21T10:58:27.828076Z DEBUG h2::codec::framed_write: send frame=Settings { flags: (0x0), enable_push: 0, initial_window_size: 65535, max_frame_size: 16384 }
2024-01-21T10:58:27.828605Z DEBUG Connection{peer=Client}: h2::codec::framed_read: received frame=Settings { flags: (0x0), max_frame_size: 16384 }
2024-01-21T10:58:27.828685Z DEBUG Connection{peer=Client}: h2::codec::framed_write: send frame=Settings { flags: (0x1: ACK) }
2024-01-21T10:58:27.828778Z DEBUG Connection{peer=Client}: h2::codec::framed_write: send frame=Headers { stream_id: StreamId(1), flags: (0x4: END_HEADERS) }
2024-01-21T10:58:27.828925Z DEBUG Connection{peer=Client}: h2::codec::framed_write: send frame=Data { stream_id: StreamId(1) }
2024-01-21T10:58:27.828979Z DEBUG Connection{peer=Client}: h2::codec::framed_write: send frame=Headers { stream_id: StreamId(1), flags: (0x5: END_HEADERS | END_STREAM) }
2024-01-21T10:58:27.829477Z DEBUG Connection{peer=Client}: h2::codec::framed_read: received frame=Settings { flags: (0x1: ACK) }
2024-01-21T10:58:27.829540Z DEBUG Connection{peer=Client}: h2::proto::settings: received settings ACK; applying Settings { flags: (0x0), enable_push: 0, initial_window_size: 65535, max_frame_size: 16384 }
2024-01-21T10:58:27.829586Z DEBUG Connection{peer=Client}: h2::codec::framed_read: received frame=WindowUpdate { stream_id: StreamId(0), size_increment: 8 }
2024-01-21T10:58:27.829656Z DEBUG Connection{peer=Client}: h2::codec::framed_read: received frame=Ping { ack: false, payload: [2, 4, 16, 16, 9, 14, 7, 7] }
2024-01-21T10:58:27.829707Z DEBUG Connection{peer=Client}: h2::codec::framed_write: send frame=Ping { ack: true, payload: [2, 4, 16, 16, 9, 14, 7, 7] }
2024-01-21T10:58:27.830017Z ERROR client: Status { code: Unknown, message: "connection error", source: Some(hyper::Error(Io, Custom { kind: BrokenPipe, error: "connection closed because of a broken pipe" })) }
```
@PureWhiteWu I'd like to ask: have earlier versions of volo ever successfully called a standard Go gRPC server? I see this validation was added to gRPC very early (grpc/grpc#957), so if earlier versions worked this is an introduced bug, and we could narrow the search by bisecting commits.