Provide a way to identify operator-generated resources
bergerx opened this issue · 7 comments
It would be helpful to be able to identify the resources created by the controller. Some teams in our clusters create their own network policies, and they may be confused by the new NetworkPolicies we inject into their namespaces. They have no easy way to tell where these resources came from.
The common method for such cases is to set an ownerReference on the generated object pointing back to the triggering resource (e.g. the NamespaceConfig). But this will likely impact how deletion of NamespaceConfig resources is implemented, since Kubernetes itself will also try to delete the owned objects once the owner resource (the NamespaceConfig) is removed.
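For illustration, here is a rough sketch of what such an ownerReference could look like on a generated NetworkPolicy. The names, the uid, and the `redhatcop.redhat.io/v1alpha1` group/version are assumptions made for the example, not something the operator sets today:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress        # hypothetical generated object
  namespace: team-a
  ownerReferences:
    - apiVersion: redhatcop.redhat.io/v1alpha1   # assumed NamespaceConfig group/version
      kind: NamespaceConfig                      # cluster-scoped owner
      name: team-a-defaults                      # hypothetical NamespaceConfig name
      uid: 00000000-0000-0000-0000-000000000000  # would have to be the owner's real UID
      controller: true
      blockOwnerDeletion: true
spec:
  podSelector: {}
  policyTypes:
    - Ingress
```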
Another option could be adding an annotation or a label.
Cross-namespace owner references are disallowed by design, so that is not a viable option.
As far as labels go, you can add the labels yourself to the generated objects.
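As a sketch of that suggestion, the label could be carried in the NamespaceConfig itself. The layout below assumes the older `spec.selector`/`spec.resources` format from around the linked commit (field names differ in newer releases), and the label key is just one possible convention:

```yaml
apiVersion: redhatcop.redhat.io/v1alpha1
kind: NamespaceConfig
metadata:
  name: default-network-policies
spec:
  selector:
    matchLabels:
      netpol-profile: restricted
  resources:
    - apiVersion: networking.k8s.io/v1
      kind: NetworkPolicy
      metadata:
        name: default-deny-ingress
        labels:
          # marks the object as generated by the operator
          app.kubernetes.io/managed-by: namespace-configuration-operator
      spec:
        podSelector: {}
        policyTypes:
          - Ingress
```

Teams could then list the generated objects with something like `kubectl get networkpolicy -A -l app.kubernetes.io/managed-by=namespace-configuration-operator`.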
We hit a problem with that approach: the metadata field is in the default always-ignored paths for existing resources (https://github.com/redhat-cop/namespace-configuration-operator/tree/0c1fc135a068f1cebcbfe56d7dd9c4f0e278da13#excluded-paths), so this trick doesn't work for resources that already exist.
I'm not sure what you mean, could you give an example?
Also, please feel free to reopen this issue if necessary.
> Cross-namespace owner references are disallowed by design, so that is not a viable option.
It's correct that cross-namespace owner references are not allowed, but NamespaceConfig is cluster-scoped, and owner references from namespaced objects to cluster-scoped owners are allowed (Docs).
I am seeing issues with resource cleanup when deleting NamespaceConfigs. Using Kubernetes' built-in cascading deletion would ensure leftover resources get cleaned up. Can we reopen this issue? @raffaelespazzoli
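(With owner references in place, the garbage collector would remove the generated objects on its own; foreground cascading can also be requested explicitly with something like `kubectl delete namespaceconfig <name> --cascade=foreground`, where the resource name is a placeholder.)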
This has become a pressing issue for us, as a lot of resources are not deleted after the namespace label has been removed, and we have to identify and delete them manually. The problem seems to have been introduced in one of the two latest patch releases.