Unable to Bind Existing Namespace to Subnamespace Anchor
Description:
I'm encountering an issue while attempting to bind an existing namespace to a new subnamespace anchor resource using the kubectl hns set command in a Kubernetes cluster running the Hierarchical Namespace Controller (hierarchical-namespaces, HNC).
First attempt
Steps to Reproduce:
- Create a new subnamespace anchor in an existing root namespace, where the target (child) namespace already exists (a sketch of the create step follows).
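For reference, a minimal sketch of that create step. The namespace names (test-parent, test-child) are taken from the logs below, and the hnc.x-k8s.io/v1alpha2 API version is an assumption that may differ by HNC release:

```sh
# Create an anchor named after the already existing namespace "test-child"
# inside the existing root namespace "test-parent".
# The v1alpha2 API version is an assumption; adjust to your HNC release.
cat <<EOF | kubectl apply -f -
apiVersion: hnc.x-k8s.io/v1alpha2
kind: SubnamespaceAnchor
metadata:
  name: test-child
  namespace: test-parent
EOF

# Equivalent kubectl plugin command:
kubectl hns create test-child -n test-parent
```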
Expected Behavior:
HNC creates the subnamespace-anchor and binds it to the existing namespace if:
- The resource configuration matches the current tree, or the new configuration can be applied.
- The namespace is not already referenced by another subnamespace anchor.
Actual Behavior:
Creating the subnamespace anchor while the namespace already exists fails. HNC logs the following errors:
{"level":"info","ts":1709888268.4449706,"logger":"anchor.validate","msg":"Denied","ns":"test-parent","nm":"test-child","op":"CREATE","user":"test@test.com","code":409,"reason":"Conflict","message":"Operation cannot be fulfilled on subnamespaceanchors.hnc.x-k8s.io \"test-child\": cannot create a subnamespace using an existing namespace"}
{"level":"info","ts":1709888281.9173489,"logger":"hierarchyconfig.reconcile","msg":"Namespace has changed","rid":281,"ns":"test-child"}
{"level":"info","ts":1709888281.9246855,"logger":"namespace.validate","msg":"Denied","nm":"test-child","op":"UPDATE","user":"system:serviceaccount:test-parent-hnc-system:default","code":403,"reason":"Forbidden","message":"namespaces \"test-child\" is forbidden: cannot set or modify tree label \"test-child.tree.hnc.x-k8s.io/depth\" in namespace \"test-child\"; these can only be managed by HNC"}
{"level":"error","ts":1709888281.9309719,"logger":"hierarchyconfig.reconcile","msg":"while updating apiserver","rid":281,"ns":"test-child","error":"admission webhook \"namespaces.hnc.x-k8s.io\" denied the request: namespaces \"test-child\" is forbidden: cannot set or modify tree label \"test-child.tree.hnc.x-k8s.io/depth\" in namespace \"test-child\"; these can only be managed by HNC"}
{"level":"error","ts":1709888281.9310033,"logger":"controller.hierarchyconfiguration","msg":"Reconciler error","reconciler group":"hnc.x-k8s.io","reconciler kind":"HierarchyConfiguration","name":"hierarchy","namespace":"test-child","error":"admission webhook \"namespaces.hnc.x-k8s.io\" denied the request: namespaces \"test-child\" is forbidden: cannot set or modify tree label \"test-child.tree.hnc.x-k8s.io/depth\" in namespace \"test-child\"; these can only be managed by HNC"}
Second attempt
Steps to Reproduce:
```sh
kubectl hns set child --parent parent
```
Expected Behavior:
The kubectl hns set command should create or update the corresponding SubnamespaceAnchor resource after updating the tree.
Actual Behavior:
The kubectl hns set command updates the tree but does not create or update any SubnamespaceAnchor resource (see the check below).
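A rough way to verify this (namespace names taken from the command above):

```sh
# The hierarchy itself is updated...
kubectl hns tree parent

# ...but no SubnamespaceAnchor object appears in the parent namespace.
kubectl get subnamespaceanchors.hnc.x-k8s.io -n parent
```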
To address this, I ended up with a dirty workaround (a shell sketch follows the list):
- Use the kubectl hns set command to set the existing namespace's parent.
- Deactivate the webhooks.
- Create the subnamespace anchors.
- Add the annotation hnc.x-k8s.io/subnamespace-of=test-parent to the existing namespace.
- Activate the webhooks again.
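A rough shell sketch of that workaround, assuming the namespace names from the logs and the usual name of the HNC validating webhook configuration (both may differ in your environment):

```sh
# 1. Point the existing namespace at its parent in the hierarchy.
kubectl hns set test-child --parent test-parent

# 2. Temporarily deactivate HNC's validating webhooks
#    (the configuration name below is the usual default and may differ per install).
kubectl delete validatingwebhookconfiguration hnc-validating-webhook-configuration

# 3. Create the subnamespace anchor in the parent namespace.
kubectl hns create test-child -n test-parent

# 4. Mark the existing namespace as a subnamespace of the parent.
kubectl annotate namespace test-child hnc.x-k8s.io/subnamespace-of=test-parent

# 5. Reactivate the webhooks by re-applying the HNC install manifest
#    for your release.
```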
Result:
While this approach makes the hierarchy look correct on the surface, finalizer issues persist when attempting to delete a subnamespace anchor created this way (see the sketch below).
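For anyone hitting the same finalizer hang, a hedged sketch for inspecting (and, strictly as a last resort, clearing) the stuck finalizers. The names are the illustrative ones from above, and force-removing finalizers bypasses HNC's own cleanup:

```sh
# Inspect which finalizers are blocking deletion of the anchor.
kubectl get subnamespaceanchors.hnc.x-k8s.io test-child -n test-parent \
  -o jsonpath='{.metadata.finalizers}'

# Last resort: strip the finalizers so the deletion can complete.
# This bypasses HNC's cleanup logic; use with care.
kubectl patch subnamespaceanchors.hnc.x-k8s.io test-child -n test-parent \
  --type=json -p='[{"op":"remove","path":"/metadata/finalizers"}]'
```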
Question:
How can I bind an existing namespace to a new subnamespace anchor resource? Could the kubectl hns set command be made to update the SubnamespaceAnchor resources, or create them if they don't exist?
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.