Unstable results: sometimes it fails with 'unknown field'
jeroenvermeulen opened this issue · 8 comments
Strangely, this error does not happen every time.
How to repeat:
OS: macOS Sonoma 14.1.1 (23B81)
Go: go version go1.21.3 darwin/arm64
kubectl-validate: tested with both v0.0.1 and latest (eacad4b)
git clone --branch v0.2.0 https://github.com/jeroenvermeulen/bunnycdn-operator.git
cd bunnycdn-operator
go install sigs.k8s.io/kubectl-validate@latest
~/go/bin/kubectl-validate test-cr.yaml --local-crds manifests/crds
~/go/bin/kubectl-validate test-cr.yaml --local-crds manifests/crds
~/go/bin/kubectl-validate test-cr.yaml --local-crds manifests/crds
~/go/bin/kubectl-validate test-cr.yaml --local-crds manifests/crds
Sometimes it fails:
test-cr.yaml...ERROR
spec.browserCacheExpirationTime: Invalid value: value provided for unknown field
spec.cacheErrorResponses: Invalid value: value provided for unknown field
spec.cacheExpirationTime: Invalid value: value provided for unknown field
spec.cookieVaryNames: Invalid value: value provided for unknown field
spec.enableQueryStringSort: Invalid value: value provided for unknown field
spec.enableSmartCache: Invalid value: value provided for unknown field
spec.queryStringVaryParameters: Invalid value: value provided for unknown field
spec.stripResponseCookies: Invalid value: value provided for unknown field
spec.useStaleWhileOffline: Invalid value: value provided for unknown field
spec.useStaleWhileUpdating: Invalid value: value provided for unknown field
Error: validation failed
Sometimes it is OK:
test-cr.yaml...OK
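Since the failure is intermittent, a small shell loop (not part of the original report, just a sketch using the same paths as above) can run the validation many times and report how often it fails:

for i in $(seq 1 50); do
  # re-run validation; print the run number whenever kubectl-validate exits non-zero
  ~/go/bin/kubectl-validate test-cr.yaml --local-crds manifests/crds >/dev/null 2>&1 || echo "run $i: FAILED"
done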
Are you able to share the YAML this occurred with?
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
So far I haven't been able to reproduce this:
❯ cd /Users/alex/Downloads/bunnycdn-operator-0.2.0\ 2/
❯ go install sigs.k8s.io/kubectl-validate@eacad4b241bbe7e557f6f04e5a9c310e5e79b0d3
go: downloading sigs.k8s.io/kubectl-validate v0.0.2-0.20231116230548-eacad4b241bb
go: downloading k8s.io/apimachinery v0.28.1
go: downloading k8s.io/apiserver v0.28.1
go: downloading k8s.io/apiextensions-apiserver v0.28.1
go: downloading k8s.io/client-go v0.28.1
go: downloading k8s.io/kube-openapi v0.0.0-20230816210353-14e408962443
go: downloading k8s.io/api v0.28.1
go: downloading go.opentelemetry.io/otel v1.17.0
go: downloading k8s.io/component-base v0.28.1
go: downloading go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.43.0
go: downloading go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.17.0
go: downloading go.opentelemetry.io/otel/sdk v1.17.0
go: downloading go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.17.0
go: downloading go.opentelemetry.io/otel/trace v1.17.0
go: downloading google.golang.org/grpc v1.57.0
go: downloading go.opentelemetry.io/otel/metric v1.17.0
go: downloading sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.1.4
❯ stress kubectl-validate test-cr.yaml --local-crds manifests/crds
5s: 1174 runs so far, 0 failures
10s: 2311 runs so far, 0 failures
15s: 3427 runs so far, 0 failures
20s: 4501 runs so far, 0 failures
25s: 5576 runs so far, 0 failures
30s: 6688 runs so far, 0 failures
35s: 7832 runs so far, 0 failures
40s: 8977 runs so far, 0 failures
45s: 10123 runs so far, 0 failures
50s: 11265 runs so far, 0 failures
55s: 12409 runs so far, 0 failures
1m0s: 13553 runs so far, 0 failures
1m5s: 14692 runs so far, 0 failures
1m10s: 15834 runs so far, 0 failures
1m15s: 17018 runs so far, 0 failures
1m20s: 18181 runs so far, 0 failures
1m25s: 19358 runs so far, 0 failures
1m30s: 20513 runs so far, 0 failures
1m35s: 21683 runs so far, 0 failures
1m40s: 22833 runs so far, 0 failures
1m45s: 23988 runs so far, 0 failures
1m50s: 25136 runs so far, 0 failures
1m55s: 26251 runs so far, 0 failures
2m0s: 27363 runs so far, 0 failures
2m5s: 28463 runs so far, 0 failures
2m10s: 29567 runs so far, 0 failures
This is on the linked commit, on the latest commit, and on v0.0.1 (3b3ca3a).
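The output above appears to come from the Go stress utility (golang.org/x/tools/cmd/stress), which repeatedly runs a command and reports any non-zero exits. Assuming that is the tool used, it can be installed with:

go install golang.org/x/tools/cmd/stress@latest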
/remove-lifecycle stale
I am also not able to reproduce it anymore.
Thanks for updating, really appreciate it!
Signs of non-determinism are very concerning. This issue is especially mysterious since I can't reproduce it from the same source commit. Unfortunately we can't get more information about this issue, so I will close it. If someone else runs into something similar, we can open another issue.
/close
@alexzielenski: Closing this issue.