Better support for encrypted files (for example SOPS)
devantler opened this issue · 6 comments
What would you like to be added?
When files are encrypted, the encryption tool can add fields that do not match the Kubernetes API spec. One example is a SOPS-encrypted file:
# For example, this encrypted dockerconfig is needed to talk to GHCR.
apiVersion: v1
kind: Secret
metadata:
  name: ghcr-auth
  namespace: gha-runner-scale-set
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: ENC[AES256_GCM,data:lOG4H51EuHtU93AGrKmgE1aKkjlPfi8zcGNHoRhaD/6p3HxYFoQMrYmlAAf1ut7J9s1l6Ab5vrGNe4db4d5pEEzh03Xdwy9rvbwsDNUDCzTCV2pANDYsNBqdiFXjetmkOB5TMmdCiKA9/H6EndRAnqSycwz4Om3ZeDA+ADK5G0aOapBInLWmve5vMMRIY5Dd4s3vNHQEc+clZdmJV9TjbskDl9jpZx5i0cSYM0Qq0+u/tTUSDYcfA5T3ob79SIGqOtNGg/Gf4aFtZyvMGqrhYZfCYD5/75+tIrgcqlIY8Y9wSfZT1r7a4VyftyUc7wV0hkBS8vMfqCCqCVrIBtXGTQniiSsMemQD,iv:mV73ISGWKtn3jd2PmqSELYvMGZKI6eeDOVYTwYEce3w=,tag:i+KsARSKVvCvDfZeY3uo9Q==,type:str]
sops:
  kms: []
  gcp_kms: []
  azure_kv: []
  hc_vault: []
  age:
    - recipient: age1jatx9ceun6ugkj6qd63ke0ar840h5hk8uxvq7nrf74amc30kagnszna2rc
      enc: |
        -----BEGIN AGE ENCRYPTED FILE-----
        YWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSBOUlhsV2Jvb3dCNzZjdHdC
        K1hjM1NpaFd0UVcyRFFBeXFkNzk4NTNjZWtJCmxOZU9xTW9hT0VONkdiZXgvcGJ0
        aHNWS2lZcVR3TGZpS3FyR1FacFc1dFEKLS0tIGk3bG5KREZ4TTVLcmNHdnFUU290
        cTdBSmRXVHFFU1l1TTFpQWI2anAzL2MK8BKEFv7ovsXC7fPcDXiY5xsO8CABpc0L
        nBzoyf5D1JdwywpK2TJODAwSOTBVwO59w/TWzWo37zRDpHHsBTbWEw==
        -----END AGE ENCRYPTED FILE-----
    - recipient: age174v7lh96xmh286p46t90pgnl3ymnrzwe9y9vspd53cqgupvp8a0q5ng5ca
      enc: |
        -----BEGIN AGE ENCRYPTED FILE-----
        YWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSB2NkM5NXVKUVAwTWVTd20w
        aWdIOXlITTFrYkpEcWhCcTdhYmlxbWhGTTE4Ckhhc2tPZjZvckNteHljbWZ6SmQ1
        bGFMYjVvWGhzekM1V2pjZ2FpWlltQWsKLS0tIG1tZFdsNCtSb3Z1R1hGOTVZZTFO
        WVZXd1YyMHZ4SkxIU2ZYNFRGdjJRRmcKtjX2J7PpoHKDknNcby5v3PxT2wSgUrh8
        W2RIA0eVRrbhFCAEnQEfwbKErXmTUczU2BthyY3AFCkd0qhT+6gkmg==
        -----END AGE ENCRYPTED FILE-----
  lastmodified: "2024-03-08T13:58:35Z"
  mac: ENC[AES256_GCM,data:R5ogzw3SoBWcrjawchoTfZlybuWvttoEgs8hn0zGk/X3ndZIgkS8HpxI2A8obbolcW8RqxPPSff2AbTTV4ZUQ6aPF28n3jABXTl6wYaN0LMV9OOcrZgUkOUzr0ZcdbsVspL2vMKm7zJlTBs/mNblpaZhEx2LmONVQ99+dbqTPbk=,iv:756dv8+aZjoRJ7kq77ftaMaL5jyrPEYXC7XwmmD9mH8=,tag:jxM8wbNXII6hFHVuhFXVUQ==,type:str]
  pgp: []
  encrypted_regex: ^(data|stringData)$
  version: 3.8.1
Why is this needed?
I would expect this file to be ignored, or at least for the SOPS-related fields to be excluded from validation.
Perhaps the user could be allowed to supply glob patterns that exclude whole files, or sections of a file, from validation?
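To make that concrete, here is a purely hypothetical sketch of what such an exclusion config could look like; kubectl-validate has no such option today, and the file name and every field below are invented for illustration only:

# hypothetical .kubectl-validate-ignore.yaml -- not an existing kubectl-validate feature
ignore:
  files:
    # skip encrypted manifests entirely
    - "**/*.enc.yaml"
  fields:
    # or strip these top-level fields before validating
    - sops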
Seems similar to #80
I'm not familiar with SOPS; is this a preprocessor?
SOPS stands for Secrets OPerationS and is a tool that is quite popular in GitOps setups. It lets you encrypt YAML files at rest without needing an agent in the cluster to decrypt them.
The files are decrypted by the GitOps tooling itself, which has official support for SOPS. For example, with Flux: https://www.google.com/search?client=safari&rls=en&q=flux+sops&ie=UTF-8&oe=UTF-8#ip=1
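For reference, Flux configures SOPS decryption on the Kustomization object that applies the manifests. A minimal sketch, assuming the age private key is stored in a Secret called sops-age and the encrypted manifests live under ./apps (both names are placeholders):

apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: apps                # placeholder name
  namespace: flux-system
spec:
  interval: 10m
  path: ./apps              # placeholder path to the encrypted manifests
  prune: true
  sourceRef:
    kind: GitRepository
    name: flux-system
  decryption:
    provider: sops          # have kustomize-controller run SOPS before applying
    secretRef:
      name: sops-age        # Secret holding the age private key

With this in place the kustomize-controller decrypts the manifests just before applying them, so the files in Git only ever exist in their encrypted form.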
It does the decryption before applying the YAML manifest to the cluster, so yes, I assume it is a preprocessor. Adding x-kubernetes-preserve-unknown-fields is an option I believe could solve the issue, but it adds some overhead for developers/operators, as SOPS does not add this field by default. Maybe Kubernetes-sigs could promote this approach for third parties that rely on preprocessing, so it works out of the box once third-party tools have added support for it. I do understand that this is likely not something kubectl-validate should handle, as it would be very hard to maintain :-)
For now I will add a reminder to inform the SOPS community that adding x-kubernetes-preserve-unknown-fields during decryption could be valuable for this new and upcoming validation CLI :-)
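For context on what that marker looks like: x-kubernetes-preserve-unknown-fields is an OpenAPI schema extension, so it lives in the schema a validator consumes rather than in the manifest itself. A minimal sketch of a CRD whose schema tolerates extra fields such as an injected sops block (the CRD name and group are made up for illustration):

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com          # made-up CRD, purely illustrative
spec:
  group: example.com
  names:
    kind: Widget
    plural: widgets
    singular: widget
  scope: Namespaced
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          # fields not declared in the schema (e.g. an injected sops block)
          # pass schema validation instead of being rejected
          x-kubernetes-preserve-unknown-fields: true

Built-in types such as Secret are validated strictly against their published schemas, which is why the injected sops field is flagged in the first place.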
Consider this issue solved with #80.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.