Timeout of push operation causes panic
Closed this issue · 4 comments
max-wittig commented
- Latest git-mirror
- Source that causes it: https://gitlab.com/gitlab-org/gitlab.git
thread '<unnamed>' panicked at 'failed printing to stdout: Resource temporarily unavailable (os error 11)', library/std/src/io/stdio.rs:935:9
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
END(FAIL) 0/235 [2022-01-28 09:11:18.965363019 +00:00]: https://gitlab.com/gitlab-org/gitlab.git -> git@code.siemens.com:mirror/gitlab.git (Command "git" "push" "git@code.siemens.com:mirror/gitlab.git" "+refs/tags/*:refs/tags/*" "+refs/heads/*:refs/heads/*" failed with exit code: 1, Stderr: remote: GitLab: Push operation timed out
remote:
remote: Timing information for debugging purposes:
remote: Running checks for ref: 0a68660b-mr-iid
remote: Running checks for ref: 0a68660b-no-mr-iid
remote: Running checks for ref: 0a68660b-no-mr-line
remote: Running checks for ref: 10-0-stable-ee
remote: Running checks for ref: 10-0-stable-ee-with-ce-2017-09-18
remote: Running checks for ref: 10-1-stable-ee
remote: Running checks for ref: 10-2-stable-ee
remote: Running checks for ref: 10-2-stable-ee-with-ce-2017-11-30
remote: Running checks for ref: 10-3-stable-ee
remote: Running checks for ref: 10-3-stable-ee-with-ce-2017-12-08
remote: Running checks for ref: 10-4-stable-ee
remote: Running checks for ref: 10-5-stable-ee
remote: Running checks for ref: 10-5-stable-ee-with-ce-2018-02-09
bachp commented
Thanks for reporting.
Looking at the error, I don't think the panic was caused by the timeout.
The issue seems to be with stdout: os error 11 indicates that stdout returned EAGAIN, which suggests it has been set to non-blocking mode.
Not sure how to investigate this further.
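One way to check this on the affected machine would be something like the following minimal sketch (assuming Linux and the libc crate; this is not part of git-mirror), which reports whether stdout currently has O_NONBLOCK set:

// Minimal sketch (assumes Linux and the `libc` crate): check whether
// stdout has been switched to non-blocking mode, which would make
// `println!` fail with EAGAIN (os error 11) under heavy output.
use std::os::unix::io::AsRawFd;

fn stdout_is_nonblocking() -> bool {
    let fd = std::io::stdout().as_raw_fd();
    // F_GETFL returns the file status flags; O_NONBLOCK is the bit we care about.
    let flags = unsafe { libc::fcntl(fd, libc::F_GETFL) };
    flags != -1 && (flags & libc::O_NONBLOCK) != 0
}

fn main() {
    println!("stdout non-blocking: {}", stdout_is_nonblocking());
}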
@max-wittig Does this happen frequently in your setup?
max-wittig commented
Happens every time with GitLab
! f570cc92e305...d55104ec2f55 refs/pull/127364/merge -> refs/pull/127364/merge (unable to update loca
thread '<unnamed>' panicked at 'failed printing to stdout: Resource temporarily unavailable (os error 11)', library/std/src/io/stdio.rs:935:9
stack backtrace:
0: rust_begin_unwind
at /rustc/53cb7b09b00cbea8754ffb78e7e3cb521cb8af4b/library/std/src/panicking.rs:493:5
1: std::panicking::begin_panic_fmt
at /rustc/53cb7b09b00cbea8754ffb78e7e3cb521cb8af4b/library/std/src/panicking.rs:435:5
2: std::io::stdio::print_to
at /rustc/53cb7b09b00cbea8754ffb78e7e3cb521cb8af4b/library/std/src/io/stdio.rs:935:9
3: std::io::stdio::_print
at /rustc/53cb7b09b00cbea8754ffb78e7e3cb521cb8af4b/library/std/src/io/stdio.rs:947:5
4: core::ops::function::impls::<impl core::ops::function::FnMut<A> for &F>::call_mut
5: rayon::iter::plumbing::bridge_producer_consumer::helper
6: rayon_core::join::join_context::{{closure}}
7: rayon_core::registry::in_worker
8: rayon::iter::plumbing::bridge_producer_consumer::helper
9: rayon_core::join::join_context::{{closure}}
10: rayon_core::registry::in_worker
11: rayon::iter::plumbing::bridge_producer_consumer::helper
12: rayon_core::join::join_context::{{closure}}
13: rayon_core::registry::in_worker
14: rayon::iter::plumbing::bridge_producer_consumer::helper
15: rayon_core::join::join_context::{{closure}}
16: rayon_core::registry::in_worker
17: rayon::iter::plumbing::bridge_producer_consumer::helper
18: <rayon_core::job::StackJob<L,F,R> as rayon_core::job::Job>::execute
19: rayon_core::registry::WorkerThread::wait_until_cold
20: rayon_core::join::join_context::{{closure}}
21: rayon_core::registry::in_worker
22: rayon::iter::plumbing::bridge_producer_consumer::helper
23: <rayon_core::job::StackJob<L,F,R> as rayon_core::job::Job>::execute
24: rayon_core::registry::WorkerThread::wait_until_cold
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
I can provide a RUST_BACKTRACE=full backtrace soon, if needed.
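For what it's worth, the backtrace shows the panic coming from println! inside rayon worker threads. A possible mitigation on the git-mirror side (a sketch only, not the project's actual code) would be a print helper that retries transient WouldBlock failures instead of panicking:

// Sketch of a panic-free print helper. `println!` panics on any stdout
// write error; this instead retries when a non-blocking stdout returns
// EAGAIN (surfaced as io::ErrorKind::WouldBlock).
use std::io::{self, Write};

fn print_line(line: &str) -> io::Result<()> {
    let stdout = io::stdout();
    let mut handle = stdout.lock();
    loop {
        match writeln!(handle, "{}", line) {
            // A real implementation would back off or poll rather than spin,
            // and would have to handle partial writes before the error.
            Err(ref e) if e.kind() == io::ErrorKind::WouldBlock => continue,
            other => return other,
        }
    }
}

fn main() -> io::Result<()> {
    print_line("mirroring https://gitlab.com/gitlab-org/gitlab.git")
}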
bufferoverflow commented
I haven't seen this anymore and suggest closing this issue.
max-wittig commented
@bufferoverflow Pretty sure this still happens. We "fixed it" by manually pushing to GitLab, but I guess we can close it for the time being.