The ETCD service dies
Ccaomei opened this issue · 4 comments
A standalone etcd service was deployed and running normally, but then it shut down unexpectedly. The log output is as follows:
WARNING: 2020/07/02 10:35:54 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
WARNING: 2020/07/02 10:42:11 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
2020-07-02 10:46:49.268990 I | etcdserver: start to snapshot (applied: 200002, lastsnap: 100001)
2020-07-02 10:46:49.271693 I | etcdserver: saved snapshot at index 200002
2020-07-02 10:46:49.271860 I | etcdserver: compacted raft log at 195002
WARNING: 2020/07/02 10:51:50 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
WARNING: 2020/07/02 10:51:50 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
WARNING: 2020/07/02 10:51:50 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
WARNING: 2020/07/02 10:51:50 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
WARNING: 2020/07/02 10:54:34 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
WARNING: 2020/07/02 10:54:34 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
WARNING: 2020/07/02 11:01:08 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
WARNING: 2020/07/02 11:01:13 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
2020-07-02 11:02:13.034378 W | etcdserver: read-only range request "key:"/tong/service/146/hc" " with result "range_response_count:1 size:200" took too long (103.647273ms) to execute
2020-07-02 11:02:13.034749 W | etcdserver: read-only range request "key:"/tong/service/146/hc" " with result "range_response_count:1 size:200" took too long (103.979466ms) to execute
WARNING: 2020/07/02 11:05:52 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
WARNING: 2020/07/02 11:05:52 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
2020-07-02 11:21:37.753830 I | etcdserver: start to snapshot (applied: 300003, lastsnap: 200002)
2020-07-02 11:21:37.771712 I | etcdserver: saved snapshot at index 300003
2020-07-02 11:21:37.772132 I | etcdserver: compacted raft log at 295003
2020-07-02 11:22:21.984690 N | pkg/osutil: received terminated signal, shutting down...
2020-07-02 11:22:21.985625 I | etcdserver: skipped leadership transfer for single voting member cluster
WARNING: 2020/07/02 11:22:21 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
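The decisive lines are the last two before shutdown: pkg/osutil logs received terminated signal, shutting down... only after etcd catches a SIGTERM or SIGINT delivered from outside the process (for example systemctl stop, a supervisor restart, or a manual kill), so this looks like an externally requested shutdown rather than a crash; the repeated grpc "transport is closing" warnings are typically just clients disconnecting. A minimal Go sketch of this kind of interrupt handling (an illustration of the mechanism, not etcd's actual code) looks like this:

```go
package main

import (
	"log"
	"os"
	"os/signal"
	"syscall"
)

func main() {
	// Subscribe to the same termination signals etcd's interrupt handler watches for.
	sigs := make(chan os.Signal, 1)
	signal.Notify(sigs, syscall.SIGTERM, syscall.SIGINT)

	// Block until one of the signals arrives from outside the process.
	sig := <-sigs

	// etcd prints a very similar message from pkg/osutil before exiting cleanly.
	log.Printf("received %v signal, shutting down...", sig)
	os.Exit(0)
}
```

Since SIGKILL cannot be caught, an OOM kill would not produce this log line; the usual next step is to check the host (for example the systemd journal or whatever manages the etcd process) to see what sent the termination signal at 11:22:21.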
Issues go stale after 90d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle rotten
/remove-lifecycle stale
Rotten issues close after 30d of inactivity.
Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.
/close
@openshift-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.