mittwald/kubernetes-replicator

Implement a way to remove "push-based" replicated secrets when the source secret replication config changes


Problem I am having

Say I have a secret in namespace1 that has been replicated to namespace2 with the replicator.v1.mittwald.de/replicate-to: "namespace2" annotation. Now I decide that I no longer want my secret replicated to namespace2.

  • If I remove the replicator.v1.mittwald.de/replicate-to: "namespace2" annotation completely, the replicated secret remains in namespace2.
  • If I change the annotation to an empty string (replicator.v1.mittwald.de/replicate-to: ""), the replicated secret also remains in namespace2.
  • If I change the annotation to replicate to another namespace (replicator.v1.mittwald.de/replicate-to: "namespace3"), the replicated secret also remains in namespace2.

I did not find any way to delete the replicated secret other than manually deleting the resource with kubectl delete.

This behavior creates artifacts (old replicated secrets) that remain on the cluster and are not easy to track down and remove.

Describe the solution you'd like

I would like the replicated secret to be automatically removed from namespace2 when I remove the namespace name from the replicator.v1.mittwald.de/replicate-to annotation.

This could possibly be done by comparing the current value of the replicator.v1.mittwald.de/replicate-to annotation with the new value being applied, which would allow the namespaces that have been removed from the list to be identified (see the sketch after the examples below).

For example, if I changed the annotation to the following values, the behavior would be:

  • replicator.v1.mittwald.de/replicate-to: "namespace2,namespace3" : Would replicate the secret to namespace2 and namespace3
  • replicator.v1.mittwald.de/replicate-to: "namespace3" : Would remove the replicated secret in namespace2
  • replicator.v1.mittwald.de/replicate-to: "" : Would remove the replicated secret in namespace3

This would greatly simplify secret management when using push-based replication.

sebv7 commented

Would also love to have this feature

@Totalus @sebv7 Do you have any progress on this?

sebv7 commented

> @Totalus @sebv7 Do you have any progress on this?

Not from my side. Still waiting for the feature.