Allow pod-utils / clonerefs to use default remote branch
My read of current pod-utils code is that it assumes `base_ref` to always be present / non-empty (ref: #20665 (comment)).
It looks like there are 435 jobs that explicitly reference `master`, when what they likely mean is "just use whatever the default remote branch is":

```console
$ ag --no-filename "base_ref:.*master" config/jobs | grep base_ref | wc -l
435
```
Ideally `base_ref` would be auto-populated with the remote default branch if omitted. Maybe this already happens? This would make repo default-branch renames less toilsome.
Opening this to track verifying or fixing, so not yet a bug or feature.
/area prow/pod-utilities
/assign @alvaroaleman @cjwagner
in case you can answer more quickly than I can; feel free to /unassign if you can't get to it
/assign @spiffxp
to verify what happens today
/sig testing
> My read of current pod-utils code is that it assumes `base_ref` to always be present / non-empty (ref: #20665 (comment))
Yeah, that is correct (for periodics; for presubmits/postsubmits we get the info from GitHub).
> Ideally base_ref would be auto-populated with the remote default branch if omitted. Maybe this already happens? This would make repo default branch renames less toilsome.
I am pretty sure this does not happen. Adding this would be pretty easy from a complexity POV, as the `default_branch` is part of the repo object. It will drive up token usage a bit though (and I think it won't cache that well, as the repo object contains a lot of info that changes regularly, like the `pushed_at` timestamp, size, number of stars, forks, etc.).
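For context, that's this field on the GitHub API's repo object (output illustrative):

```console
$ curl -s https://api.github.com/repos/kubernetes/test-infra | jq .default_branch
"master"
```

The same payload carries `pushed_at`, `size`, `stargazers_count`, `forks_count`, and so on, which is why conditional requests against it would rarely get a cacheable 304.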
If we're concerned about token usage, we could try doing this directly in clone by using the contents of `.git/refs/remotes/origin/HEAD` if `base_ref` is empty. I dunno if that makes the API feel too weird though (technically it does `omitempty` as-is).
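For reference, a plain clone leaves that symref behind (illustrative; as I understand it, clonerefs does an init+fetch rather than a full clone, so it may need a `git remote set-head origin --auto` first to populate this):

```console
$ git clone https://github.com/kubernetes/test-infra
$ cat test-infra/.git/refs/remotes/origin/HEAD
ref: refs/remotes/origin/master
$ git -C test-infra symbolic-ref refs/remotes/origin/HEAD
refs/remotes/origin/master
```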
/area jobs
since this would be a job config change too
/wg naming
since this is in support of kubernetes/org#2222
Relevant discussion thread in #prow: https://kubernetes.slack.com/archives/CDECRSC5U/p1615236463055600
Discussed at the sig-testing meeting today:
- Goal is to add support for this to `extra_refs`, and not to the spec for triggering presubmits/postsubmits
  - so, the `extra_refs` used by periodics
  - and also, any `extra_refs` used by presubmits/postsubmits that clone multiple repos
- We might be able to use `HEAD` as part of the `extra_refs` spec today (see the sketch below)
- If not, I'll try prototyping something; if it gets too complex, I'll pull back to a proposal for review
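To make the target concrete, a periodic using this would look something like the sketch below; the job name is made up, and `base_ref: HEAD` is the proposed behavior, not something that works today:

```yaml
periodics:
- name: example-periodic   # hypothetical job name
  interval: 24h
  extra_refs:
  - org: kubernetes
    repo: test-infra
    base_ref: HEAD         # proposed: "whatever the remote default branch is"
```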
Since the `HEAD` symbolic ref is not a default `git` thing and instead a real symbolic ref that e.g. GitHub, Gerrit, GitLab provide, it should be possible to just use `extra_refs[*].base_ref: HEAD`.
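You can see a remote advertising that symref directly (illustrative output; the commit sha is elided as a placeholder):

```console
$ git ls-remote --symref https://github.com/kubernetes/test-infra HEAD
ref: refs/heads/master	HEAD
<commit-sha>	HEAD
```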
FWIW, I feel like remotes having a `HEAD` is a default `git` thing, but maybe I'm wrong. I'll give the `base_ref: HEAD` thing a shot this week.
Not quite workable, because we try to check it out to a branch with that name, and `HEAD` is not a valid branch name (refs: #21452 (comment)).
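That failure is easy to reproduce by hand; git rejects `HEAD` as a branch name outright (exact wording may vary by git version):

```console
$ git checkout -b HEAD
fatal: 'HEAD' is not a valid branch name.
```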
Issues go stale after 90d of inactivity.
Mark the issue as fresh with `/remove-lifecycle stale`.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with `/close`.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
I think we could add something simple to replace `HEAD` with a valid branch name, and we should still consider this.
/remove-lifecycle stale
/milestone v1.23
/kind feature
```console
$ ag --no-filename "base_ref:.*master" config/jobs | grep base_ref | wc -l
1141
```

The number of jobs affected has grown considerably from the 435 that existed at the beginning of this year.
/milestone v1.24
/priority important-longterm
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
@cjwagner @stevekuznetsov @chaodaiG WDYT about #20667 (comment)?
I think we could do this pretty easily by just detecting `HEAD` => pick a different value for the branch name, but otherwise leave `HEAD` untouched in most places. Last time I tried this, the branch name seemed to be the only stumbling block. I'm not sure what we should call it.
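A minimal sketch of that substitution in shell, assuming an `origin` remote; the `default` variable and the flow are illustrative, not the actual clonerefs code:

```console
# resolve what HEAD points at on the remote, e.g. "master"
$ default=$(git ls-remote --symref origin HEAD | awk '/^ref:/ {sub("refs/heads/", "", $2); print $2}')
# fetch and check out under that (valid) name instead of literal "HEAD"
$ git fetch origin "refs/heads/${default}"
$ git checkout -b "${default}" FETCH_HEAD
```

The local branch name could just as well be a fixed sentinel; the only hard requirement is that it passes git's branch-name validation.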
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
This would now be at https://sigs.k8s.io/prow
I think so far we've just wound up updating the configs as necessary. I do think this would be nice to have, but there clearly haven't been the resources, and this is no longer the place to track Prow features.