LeaderElector's 'stopLeadingHook' runs when leadership was not acquired
k8s-triage-robot closed this issue · 5 comments
Describe the bug
The doc string for the LeaderElector's run() method states:

stopLeadingHook called when a LeaderElector client stops leading

However, stopLeadingHook is called both when the client stops leading and when an exception causes the client to exit the acquire loop. In our particular case, the thread executing the acquire loop experienced an InterruptedException, which triggered the hook even though leadership had never been acquired. I think either the docs should be updated to reflect that the hook is called in such cases, or the code should be updated to only run the hook when leadership was actually acquired.
From the caller's perspective, I think the latter probably makes more sense. I'd be happy to submit a PR.
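To illustrate the distinction from the caller's side, here is a hypothetical workaround sketch (not the proposed library change itself): it assumes only the two-argument run(startLeadingHook, stopLeadingHook) overload and suppresses the stop hook unless the start hook actually ran.

```java
import io.kubernetes.client.extended.leaderelection.LeaderElector;
import java.util.concurrent.atomic.AtomicBoolean;

class GuardedHooks {
  // Hypothetical caller-side guard: only propagate stopLeadingHook if
  // startLeadingHook ran first, i.e. leadership was actually held.
  static void runGuarded(LeaderElector elector, Runnable onStart, Runnable onStop) {
    AtomicBoolean acquired = new AtomicBoolean(false);
    elector.run(
        () -> {
          acquired.set(true);
          onStart.run();
        },
        () -> {
          // Without this guard, the stop hook also fires when the acquire
          // loop exits on an exception (e.g. InterruptedException) before
          // leadership was ever acquired.
          if (acquired.getAndSet(false)) {
            onStop.run();
          }
        });
  }
}
```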
Client Version
Present on current tip of master, a86df3a753af9c616f40c18b37ac6ef86921ece9
Kubernetes Version
1.26
Java Version
Java 17
To Reproduce
Steps to reproduce the behavior:
- Pass any stopLeadingHook to the LeaderElector's run() method.
- Cause the acquire loop to throw an exception (in our case, an InterruptedException).
- Observe the stop leading hook being executed (see the sketch below).
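A rough reproduction sketch (untested; the namespace, lease name, identity, and timings are illustrative, and the LeaseLock/LeaderElectionConfig constructors are assumed from the client's leader-election package at the referenced commit):

```java
import io.kubernetes.client.extended.leaderelection.LeaderElectionConfig;
import io.kubernetes.client.extended.leaderelection.LeaderElector;
import io.kubernetes.client.extended.leaderelection.resourcelock.LeaseLock;
import io.kubernetes.client.openapi.ApiClient;
import io.kubernetes.client.util.Config;
import java.time.Duration;

public class Repro {
  public static void main(String[] args) throws Exception {
    ApiClient client = Config.defaultClient();

    LeaderElectionConfig config =
        new LeaderElectionConfig(
            new LeaseLock("default", "repro-lease", "candidate-1", client),
            Duration.ofSeconds(15),   // lease duration
            Duration.ofSeconds(10),   // renew deadline
            Duration.ofSeconds(2));   // retry period

    LeaderElector elector = new LeaderElector(config);

    Thread t =
        new Thread(
            () ->
                elector.run(
                    () -> System.out.println("started leading"),
                    () -> System.out.println("stopped leading")));
    t.start();

    // Interrupt the acquire loop before leadership is acquired (e.g. while
    // another candidate still holds the lease). "stopped leading" is printed
    // even though "started leading" never was.
    Thread.sleep(1000);
    t.interrupt();
    t.join();
  }
}
```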
Expected behavior
Based on the doc string of the method, I expected the 'stopLeadingHook' to only be executed in cases where the client had actually acquired leadership and its leadership ended.
Server (please complete the following information):
- OS: Linux
- Environment: EC2 VM
- Cloud: AWS
Your proposed change makes sense to me. Happy to have a PR (and a unit test) for this fix.
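To make that concrete, one possible shape for such a unit test (a sketch only: JUnit 5 is assumed, and pointing the Lock at an unreachable API server is just a simple stand-in for a mock Lock that never grants leadership):

```java
import static org.junit.jupiter.api.Assertions.assertFalse;

import io.kubernetes.client.extended.leaderelection.LeaderElectionConfig;
import io.kubernetes.client.extended.leaderelection.LeaderElector;
import io.kubernetes.client.extended.leaderelection.resourcelock.LeaseLock;
import io.kubernetes.client.openapi.ApiClient;
import java.time.Duration;
import java.util.concurrent.atomic.AtomicBoolean;
import org.junit.jupiter.api.Test;

class LeaderElectorStopHookTest {

  @Test
  void stopLeadingHookDoesNotRunWhenLeadershipWasNeverAcquired() throws Exception {
    // Acquisition keeps failing against an unreachable API server, so the
    // elector never leaves its acquire loop before we interrupt it.
    ApiClient unreachable = new ApiClient().setBasePath("http://127.0.0.1:1");
    LeaderElectionConfig config =
        new LeaderElectionConfig(
            new LeaseLock("default", "test-lease", "candidate-1", unreachable),
            Duration.ofSeconds(15),
            Duration.ofSeconds(10),
            Duration.ofSeconds(1));

    AtomicBoolean stopHookRan = new AtomicBoolean(false);
    LeaderElector elector = new LeaderElector(config);

    Thread t = new Thread(() -> elector.run(() -> {}, () -> stopHookRan.set(true)));
    t.start();
    Thread.sleep(500);   // give the acquire loop time to start
    t.interrupt();       // force an InterruptedException inside the acquire loop
    t.join(5_000);

    assertFalse(
        stopHookRan.get(),
        "stopLeadingHook should not run when leadership was never acquired");
  }
}
```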
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.