thyarles/knsk

Rancher is impossible to delete


At first, I tried removing Rancher by deleting its namespaces, but 3 of them remained stuck in the Terminating state.

Then I tried to remove them with Rancher's system-tools CLI (https://rancher.com/docs/rancher/v2.x/en/system-tools/#remove), but those 3 namespaces were still stuck in Terminating.

Then I tried to run ./knsk.sh --delete-all --force and it wasn't able to remove them either.

What should I do now? It seems something is beyond the reach of all these tools.
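As an aside, a namespace's status conditions usually explain why its deletion hangs; assuming a reasonably recent kubectl, something like this prints them for the local namespace:

kubectl get namespace local -o jsonpath='{.status.conditions}'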

$ ./knsk.sh --delete-all --force

Kubernetes NameSpace Killer

Checking if kubectl is configured... ok

Checking for unavailable apiservices... not found

Checking for stuck namespaces... found
.: Checking resources in namespace local... found
   > clusteralertgroup.management.cattle.io/cluster-scan-alert... error
   > clusteralertgroup.management.cattle.io/etcd-alert... error
   > clusteralertgroup.management.cattle.io/event-alert... error
   > clusteralertgroup.management.cattle.io/kube-components-alert... error
   > clusteralertgroup.management.cattle.io/node-alert... error
   > clusterroletemplatebinding.management.cattle.io/creator-cluster-owner... error
   > clusterroletemplatebinding.management.cattle.io/u-b4qkhsnliz-admin... error
   > node.management.cattle.io/machine-9sssc... error
   > node.management.cattle.io/machine-ks6z6... error
   > node.management.cattle.io/machine-v4v89... error
   > project.management.cattle.io/p-cnj28... error
   > project.management.cattle.io/p-mbvfd... error
.: Checking resources in namespace p-cnj28... found
   > projectalertgroup.management.cattle.io/projectalert-workload-alert... error
   > projectalertrule.management.cattle.io/less-than-half-workload-available... error
   > projectalertrule.management.cattle.io/memory-close-to-resource-limited... error
   > projectroletemplatebinding.management.cattle.io/app-jdnmz... error
   > projectroletemplatebinding.management.cattle.io/creator-project-owner... error
   > projectroletemplatebinding.management.cattle.io/prtb-s6fhc... error
   > projectroletemplatebinding.management.cattle.io/u-2gacgc4nfu-member... error
   > app.project.cattle.io/project-monitoring... error
.: Checking resources in namespace p-mbvfd... found
   > projectalertgroup.management.cattle.io/projectalert-workload-alert... error
   > projectalertrule.management.cattle.io/less-than-half-workload-available... error
   > projectalertrule.management.cattle.io/memory-close-to-resource-limited... error
   > projectroletemplatebinding.management.cattle.io/creator-project-owner... error
   > projectroletemplatebinding.management.cattle.io/u-efxo6n6ndd-member... error
   > app.project.cattle.io/cluster-alerting... error
   > app.project.cattle.io/cluster-monitoring... error
   > app.project.cattle.io/monitoring-operator... error
.: resources deleted, waiting to see if Kubernetes do a clean namespace deletion... ok      

Checking for stuck resources in the cluster... not found

Checking for orphan resources in the cluster... not found

Forcing deletion of stuck namespaces
.: Checking compliance of --force option... ok
.: Getting the access token to force deletion... ok
.: Starting kubectl proxy... ok
.: Checking for resisted stuck namespaces to force deletion... found
   > Forcing deletion of local... ok
   > Forcing deletion of p-cnj28... ok
   > Forcing deletion of p-mbvfd... ok
.: Stopping kubectl proxy... ok

:: Done in 588 seconds.

$ k get ns 
NAME                   STATUS        AGE
...
local                  Terminating   199d
p-cnj28                Terminating   199d
p-mbvfd                Terminating   199d
...
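As a side note, the resources that knsk flags with errors can also be listed without the tool; the usual recipe (assuming kubectl api-resources is available) walks every namespaced resource type and shows whatever still exists inside a stuck namespace:

kubectl api-resources --verbs=list --namespaced -o name \
  | xargs -n 1 kubectl get --show-kind --ignore-not-found -n local

Anything that keeps coming back is typically held by a finalizer whose controller no longer exists, which is what the fix below works around.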

Okay, so here is what I did to manually remove them.

I grabbed all the resources that showed errors above, kept only the unique ones, and then ran this script:

for ns in local p-cnj28 p-mbvfd ; do
  for error in \
      app.project.cattle.io/{cluster-alerting,cluster-monitoring,monitoring-operator,project-monitoring} \
      clusteralertgroup.management.cattle.io/{cluster-scan-alert,etcd-alert,event-alert,kube-components-alert,node-alert} \
      clusterroletemplatebinding.management.cattle.io/{creator-cluster-owner,u-b4qkhsnliz-admin} node.management.cattle.io/machine-{9sssc,ks6z6,v4v89} \
      project.management.cattle.io/{p-cnj28,p-mbvfd} projectalertgroup.management.cattle.io/projectalert-workload-alert \
      projectalertrule.management.cattle.io/{less-than-half-workload-available,memory-close-to-resource-limited} \
      projectroletemplatebinding.management.cattle.io/{app-jdnmz,creator-project-owner,prtb-s6fhc,u-2gacgc4nfu-member,u-efxo6n6ndd-member} ; do
    for resource in $(kubectl get -n "$ns" "$error" -o name) ; do
      kubectl patch -n "$ns" "$resource" -p '{"metadata": {"finalizers": []}}' --type='merge'
    done
  done
done

kubectl get ns 
# all gone
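Clearing metadata.finalizers like this skips whatever cleanup the Rancher controllers were supposed to perform; that is acceptable here only because Rancher itself has already been removed. A more generic variant that doesn't hard-code the resource names could look roughly like this (just a sketch, not what knsk currently does): walk every namespaced resource type still present in the namespace and clear its finalizers.

for ns in local p-cnj28 p-mbvfd ; do
  # every namespaced resource type the API server can list
  for kind in $(kubectl api-resources --verbs=list --namespaced -o name) ; do
    # every object of that type still left in the stuck namespace
    for resource in $(kubectl get -n "$ns" "$kind" -o name --ignore-not-found 2>/dev/null) ; do
      # drop the finalizers that block deletion
      kubectl patch -n "$ns" "$resource" --type=merge -p '{"metadata": {"finalizers": []}}'
    done
  done
done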

Hi @immanuelfodor,

Amazing! I will try to adjust the script to delete the resources the way you did.

Thank you!