Better support for Kustomize
devantler opened this issue · 7 comments
What would you like to be added?
I am unable to make kubectl-validate work with Kustomize when running it against my running cluster. I would expect the validation to use the CRDs from the cluster, but it seems there is no OpenAPI spec for Kustomize resources yet.
Furthermore, it would be neat to have some way to whitelist or ignore patches by default (maybe by checking the file name), so that patches do not fail validation just because they do not follow the full spec. For example, this patch adds a GHCR image pull secret to a Flux HelmRelease and allows it to use the host network:
apiVersion: helm.toolkit.fluxcd.io/v2beta2
kind: HelmRelease
metadata:
  name: gha-runner-scale-set
  namespace: gha-runner-scale-set
spec:
  values:
    template:
      spec:
        imagePullSecrets:
          - name: ghcr-auth
        hostNetwork: true
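For reference, a partial patch like this is wired into the overlay's kustomization.yaml roughly as follows (file names are illustrative, not taken from my repository):
# kustomization.yaml (illustrative)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../base
patches:
  - path: gha-runner-scale-set-patch.yaml  # the partial HelmRelease shown above
It is exactly such partial documents that fail validation when checked on their own, because they only carry the fields being patched.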
Why is this needed?
I would expect Kustomize to work out of the box, or with little configuration, as it is quite commonly used in GitOps to deploy components and to patch manifests.
kustomize includes a command to render the YAMLs. The workflow would be:
kustomize build <input_folder> -o <rendered_output>
kubectl-validate <rendered_output>
Given it is a single step in a CI workflow, I'm not sure it makes sense to introduce a dependency for this.
Note that kustomize renders into a single output file, so you may want to split the documents using yq (source). A basic example naming the files after metadata.name:
kustomize build | yq --split-exp '.metadata.name + ".yaml"' --no-doc
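Put together, a CI step could look roughly like this (a sketch only: the overlay path is illustrative, and it assumes kustomize, yq, and kubectl-validate are all installed):
#!/usr/bin/env bash
set -euo pipefail

overlay="$(pwd)/overlays/production"   # illustrative overlay path
rendered="$(mktemp -d)"

# Render the overlay and split the multi-document output into one file per
# resource, named after metadata.name (split command as suggested above).
(cd "$rendered" && kustomize build "$overlay" | yq --split-exp '.metadata.name + ".yaml"' --no-doc)

# Point kubectl-validate at the directory of rendered manifests.
kubectl-validate "$rendered"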
Furthermore, it would be neat to have some way to whitelist or ignore patches by default (maybe by checking the file name), so that patches do not fail validation just because they do not follow the full spec. For example, this patch adds a GHCR image pull secret to a Flux HelmRelease and allows it to use the host network:
You can use overlay-schemas to provide a patch schema that injects x-kubernetes-preserve-unknown-fields: true for the paths you want to ignore.
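Roughly, the kind of patch schema meant here is an OpenAPI fragment along these lines (the exact file layout and flag are described in the kubectl-validate README; the path below is just an example):
# Illustrative schema overlay fragment: marks the chosen subtree as
# accepting arbitrary fields, so partial documents are not rejected there.
properties:
  spec:
    properties:
      values:
        x-kubernetes-preserve-unknown-fields: true
With something like that in place, validation would no longer flag unknown fields under the chosen path.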
kustomize includes a command to render the YAMLs. The workflow would be:
kustomize build <input_folder> -o <rendered_output>
kubectl-validate <rendered_output>
Given it is a single step in a CI workflow, I'm not sure it makes sense to introduce a dependency for this.
Note that kustomize renders into a single output file, so you may want to split the documents using yq (source). A basic example naming the files after metadata.name:
kustomize build | yq --split-exp '.metadata.name + ".yaml"' --no-doc
I'll try this out and let you know once I have a feel for the workflow. I would love the validation tool to work without much fiddling, and in this case I do feel it would make sense to add a flag that lets the tool build the Kustomize output before validating the files. This seems like a widespread use case, so having to do it beforehand seems counterintuitive. In any case, the error does not indicate that the missing Kustomize build step is the issue, so it is a bit hard to infer that from my perspective. What do you think?
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.