kubernetes/git-sync

git-sync v4 fetches the HEAD of the remote default branch instead of `master` if --ref is not specified

nan-yu opened this issue · 7 comments

nan-yu commented

This is a breaking change in git-sync v4.

In git-sync v3, if neither --branch nor --rev is specified, it fetches the HEAD commit on the default `master` branch.

In git-sync v4, if --ref is not specified, it fetches the HEAD commit on the remote default branch, which may not be `master`.
Upgrading from v3 to v4 will therefore silently select a different set of content, which can lead to cascading issues (e.g. unexpected resource deletion in Config Sync).
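
For comparison, a minimal sketch of the two invocations (the repo URL and root path here are placeholders, not taken from this issue):

    # git-sync v3: --branch defaults to "master" and --rev to "HEAD" when unset,
    # so these flags only make the v3 default explicit
    git-sync --repo=https://github.com/example/repo --branch=master --rev=HEAD --root=/tmp/git

    # git-sync v4: --ref defaults to the remote HEAD (e.g. "main");
    # pinning it to "master" preserves the v3 behavior across the upgrade
    git-sync --repo=https://github.com/example/repo --ref=master --root=/tmp/git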

Fun. The rationale for changing this was that most new git repos are switching from "master" to "main" as the default branch.

The reality is that --branch and --rev are redundant - having both came from a lack of understanding of how git works.

I will have to think hard about this. If you specify neither flag, is the remote default branch an unreasonable default?
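
As a quick way to qualify the upgrade, git itself can report which branch the remote's HEAD points at. A sketch, with a placeholder URL:

    # Ask the remote which branch its HEAD (the default branch) points at
    git ls-remote --symref https://github.com/example/repo HEAD
    # ref: refs/heads/main  HEAD
    # <sha>                 HEAD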

nan-yu commented

I think using the remote default branch makes sense if neither flag is specified, but it behaves differently from v3.
v3 forces a default (`master`) which may not be the same as the remote default.

Unlike #840, this change won't return any error; it will silently cause a different set of content to be selected.
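
One way to surface the silent difference is to compare the commit git-sync actually checked out against the tip of the branch you intended to track. A rough sketch; the link path and repo URL are assumptions:

    # SHA of the worktree git-sync checked out (the link path is an assumption)
    synced_sha=$(git -C /git/repo rev-parse HEAD)

    # SHA at the tip of the branch you intended to track
    expected_sha=$(git ls-remote https://github.com/example/repo refs/heads/master | cut -f1)

    if [ "$synced_sha" != "$expected_sha" ]; then
      echo "git-sync is not tracking refs/heads/master"
    fi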

So, on one hand this is simpler and it is a major version update, so people SHOULD beware.

On the other hand, the failure mode could be nasty.

Is it the case that people will blindly update from v3 to v4 in prod, without qualifying it? Is there no way to force them to pay attention?

In hindsight, maybe having a default for --ref is a mistake, and we should force people to say "master" or "HEAD", but that would be ANOTHER breaking change, meaning v5.
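
For illustration, the two explicit choices would be spelled like this under the v4 flag (placeholder URL):

    # Follow whatever the remote's default branch currently is
    git-sync --repo=https://github.com/example/repo --ref=HEAD

    # Pin a specific branch, regardless of the remote default
    git-sync --repo=https://github.com/example/repo --ref=master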

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
