Allow marking fields that should skip generating 'default' values in schemas
munnerz opened this issue · 6 comments
When generating schemas that reference external types, it is sometimes helpful to avoid emitting 'default' values for fields within the referenced type's schema.
As noted in kubernetes/kubernetes#95587, it is recommended that defaulting be skipped for templates of other resources, instead letting the apiserver set those defaults when the template is actually submitted to it.
Without a way to avoid generating these defaults, a controller that relies on hashing, say, a PodTemplate to decide whether the underlying Pod needs an update cannot tell whether the object has actually changed or a newly defaulted field has simply been introduced.
This is especially problematic when you consider stateful systems, as a change to a CRD that introduces a new default value will trigger a re-creation of all underlying pods.
Many alternatives for solving this are discussed in kubernetes/community#6764, including having controllers hash only the fields they are interested in. That approach is problematic because, as new fields are added over time, controllers may 'miss' them and therefore fail to trigger updates when those values actually do change.
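A minimal sketch of the hashing pattern in question, assuming a controller that hashes the whole serialized corev1.PodTemplateSpec (the function name hashPodTemplate is illustrative, not taken from any existing controller): once the CRD schema starts injecting a new default into the stored template, the hash changes even though the user's manifest did not.

```go
package example

import (
	"crypto/sha256"
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// hashPodTemplate hashes the entire serialized template. Any field that the
// CRD schema begins defaulting (e.g. a newly introduced default value)
// changes this hash, so the controller believes the Pod must be recreated.
func hashPodTemplate(tpl *corev1.PodTemplateSpec) (string, error) {
	raw, err := json.Marshal(tpl)
	if err != nil {
		return "", err
	}
	return fmt.Sprintf("%x", sha256.Sum256(raw)), nil
}
```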
Proposal
Adding a new marker that can be set on fields, e.g. // +kubebuilder:default:skip=true, which would cause default values not to be set in the generated sub-schema.
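A hedged sketch of how the proposed marker might be used on a CRD type; the marker name and its skip semantics are only what this issue suggests, not an implemented controller-tools marker, and MyAppSpec is a made-up type.

```go
package example

import (
	corev1 "k8s.io/api/core/v1"
)

type MyAppSpec struct {
	// Template for the Pods this controller creates. With the proposed
	// (not yet implemented) marker below, defaults declared on the embedded
	// core/v1 types would be omitted from the generated sub-schema, leaving
	// defaulting to the apiserver when the Pod is actually created.
	// +kubebuilder:default:skip=true
	Template corev1.PodTemplateSpec `json:"template"`
}
```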
My only concern so far with implementing this is that markers currently do not recurse into sub-schemas. Is this intentional, and are there issues with doing this sort of recursive lookup?
An example of a problematic declarative default in a core type: https://github.com/kubernetes/api/blob/5147c1a32f6a0b9b155bb84e59f933e0ff8a3792/core/v1/types.go#L2144
Sounds reasonable to me. No idea about the recursive lookup.
Not really familiar with the current implementation, but maybe some sort of "post-processing" to recursively remove previously added defaults would work? (Not sure if that is different from what you suggested.)
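A minimal sketch of that post-processing idea, assuming the generated schema is an apiextensions/v1 JSONSchemaProps tree; stripDefaults is a hypothetical helper, not existing controller-tools code.

```go
package example

import (
	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
)

// stripDefaults removes every `default` from the schema tree rooted at s,
// walking object properties, array items, and the common schema combinators.
func stripDefaults(s *apiextv1.JSONSchemaProps) {
	if s == nil {
		return
	}
	s.Default = nil
	for name, prop := range s.Properties {
		stripDefaults(&prop)
		s.Properties[name] = prop
	}
	if s.Items != nil {
		stripDefaults(s.Items.Schema)
		for i := range s.Items.JSONSchemas {
			stripDefaults(&s.Items.JSONSchemas[i])
		}
	}
	if s.AdditionalProperties != nil {
		stripDefaults(s.AdditionalProperties.Schema)
	}
	for i := range s.AllOf {
		stripDefaults(&s.AllOf[i])
	}
	for i := range s.AnyOf {
		stripDefaults(&s.AnyOf[i])
	}
	for i := range s.OneOf {
		stripDefaults(&s.OneOf[i])
	}
	if s.Not != nil {
		stripDefaults(s.Not)
	}
}
```

The generator could presumably call something like this on the sub-schema of any field carrying the proposed skip marker, after the schema has been produced as usual.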
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.