Make kubectl-validate compatible with WASM
eddycharly opened this issue · 7 comments
Currently it's not possible to build kubectl-validate as a WASM module, because transitive dependencies pull in a reference to etcd (which uses syscalls unsupported on js/wasm).
GOOS=js GOARCH=wasm go build -o ./main.wasm ./main.go
package command-line-arguments
imports sigs.k8s.io/kubectl-validate/pkg/cmd
imports k8s.io/apiextensions-apiserver/pkg/apiserver
imports k8s.io/apiextensions-apiserver/pkg/registry/customresource
imports k8s.io/apiserver/pkg/registry/generic
imports k8s.io/apiserver/pkg/storage/storagebackend/factory
imports go.etcd.io/etcd/client/pkg/v3/transport
imports golang.org/x/sys/unix: build constraints exclude all Go files in /Users/charled/Documents/dev/eddycharly/kubectl-validate/vendor/golang.org/x/sys/unix
I think there's no reason this shouldn't be possible; it probably requires reorganizing the packages a bit to avoid the package-level reference to etcd?
Yeah, this is mainly due to our reliance on apiextensions-apiserver to validate CRDs. They have recursive schemas which the k8s validators choke on.
If we remove the ability to validate CRDs by schema and just skip/error on CRDs then this requirement can be removed.
Curious about people's thoughts on that proposition
@alexzielenski do you suggest removing CRD support ?
IIRC code like this was part of the problem:
return rest.BeforeCreate(strat, request.WithNamespace(context.TODO(), ""), obj)
@alexzielenski do you suggest removing CRD support ?
I am waffling about it. It is the only resource we cheat on for validation, and it would be a lot of work to support its recursive schema. I wonder if users would miss it.
IIRC code like this was part of the problem
return rest.BeforeCreate(strat, request.WithNamespace(context.TODO(), ""), obj)
Ah, if that is the case that can also be quite easily removed & replicated within our own codebase
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
/lifecycle frozen
Took another look at this. We rely on a lot more code than just ValidateCustomResourceDefinition inside apiextensions-apiserver (all CEL validation lives there, as well as structural schema definitions and ratcheting validators). Unfortunately it'd be quite a large refactor to move them out of that package upstream, and I would rather keep the dependency to ensure the code stays the same.