Should the Operator be NameSpaced or Cluster
ArangoGutierrez opened this issue · 10 comments
Per definition:

> When you create a new CustomResourceDefinition (CRD), the Kubernetes API Server creates a new RESTful resource path for each version you specify. The CRD can be either namespaced or cluster-scoped, as specified in the CRD's scope field. As with existing built-in objects, deleting a namespace deletes all custom objects in that namespace. CustomResourceDefinitions themselves are non-namespaced and are available to all namespaces.
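For illustration, the scope in question is a single field in the CRD manifest. A minimal sketch of a cluster-scoped variant (the group, names, and schema here are illustrative, not necessarily the operator's actual manifest):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # The CRD object itself is always cluster-scoped
  name: nodefeaturediscoveries.nfd.kubernetes.io
spec:
  group: nfd.kubernetes.io
  scope: Cluster          # or "Namespaced" -- the field under discussion
  names:
    kind: NodeFeatureDiscovery
    plural: nodefeaturediscoveries
    singular: nodefeaturediscovery
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
```

Flipping `scope` changes whether instances of the custom resource live inside a namespace or at cluster level; the CRD object itself is cluster-scoped either way.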
currently the operator runs namespaced, what do we think about it being Cluster-scoped?
> what do we think about it being Cluster-scoped?

Node objects are non-namespaced too, so I think a cluster-scoped NFD CRD would be better aligned with that.
> > what do we think about it being Cluster-scoped?
>
> Node objects are non-namespaced too, so I think a cluster-scoped NFD CRD would be better aligned with that.
Hmm, now thinking about this, I'm not really sure 🙃 Node objects sure are non-namespaced, but OTOH the operator always runs in some namespace. I'd probably stay with namespaced. It makes cleanup easier: deleting the namespace ensures that no old configs haunt you in the future. Thoughts? @zvonkok?
With namespaced CRDs the operator needs to watch all those namespaces to CRUD an NFD instance in each of them. From what I've understood, and as @zvonkok also mentioned in kubernetes-sigs/node-feature-discovery#508 (comment), one instance of NFD per cluster is preferred. AFAIU it would then be simpler to watch cluster-scoped CRDs. Alternatively, the operator could watch only one namespace (WATCH_NAMESPACE?), but it'd still be possible to create orphan CRDs...
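As a sketch of the single-namespace alternative mentioned above, assuming the operator-sdk convention where WATCH_NAMESPACE selects the watched namespace (and an empty value means watch all namespaces), the operator's Deployment could pin the watch to its own namespace via the downward API (container name and image are illustrative):

```yaml
# Fragment of a hypothetical operator Deployment pod spec
containers:
  - name: nfd-operator
    image: nfd-operator:latest   # illustrative image name
    env:
      - name: WATCH_NAMESPACE
        valueFrom:
          fieldRef:
            # Downward API: watch only the namespace the operator runs in
            fieldPath: metadata.namespace
```

With this, CRs created in any other namespace would simply never be reconciled, which is the "orphan CRDs" risk noted above.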
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
As NFD supports the instance flag, we may run multiple instances of NFD, so it seems reasonable to keep it namespaced.
For example, we may let different teams manage their own features, isolated by both label namespace and k8s namespace, e.g. gpu-nfd for nvidia.com labels under the gpu-ops ns, and network-nfd for network.example.com labels under the network-ops ns.
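The isolation scheme sketched above could look something like this with a namespaced CRD (instance names, namespaces, and the `instance` field usage are hypothetical, mirroring NFD's `-instance` flag):

```yaml
# Hypothetical: a GPU-focused NFD instance owned by the gpu-ops team
apiVersion: nfd.kubernetes.io/v1
kind: NodeFeatureDiscovery
metadata:
  name: gpu-nfd
  namespace: gpu-ops
spec:
  instance: gpu        # would map to NFD's -instance flag; labels under nvidia.com/...
---
# Hypothetical: a networking-focused instance owned by the network-ops team
apiVersion: nfd.kubernetes.io/v1
kind: NodeFeatureDiscovery
metadata:
  name: network-nfd
  namespace: network-ops
spec:
  instance: network    # labels under network.example.com/...
```

Deleting the gpu-ops namespace would then clean up gpu-nfd without touching network-nfd, which is the cleanup argument made earlier in the thread.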
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
As per #114 this is now fully documented
/close
@ArangoGutierrez: Closing this issue.
In response to this:
> As per #114 this is now fully documented
>
> /close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.