I wrote this quick-and-dirty bash script to export Kubernetes resources from a cluster in order to migrate them to a new one. In my case we had to migrate multiple self-hosted Kubernetes clusters (including EBS PVCs) to AWS EKS.
Since on one hand the --export kubectl parameter is deprecated, and on the other hand that server-side export was aimed more at migrating a namespace within the same cluster (namespace fields removed while keeping, for example, clusterIPs), I decided to use the plain YAML export and remove unnecessary keys with yq.
Right now not all resource types are implemented.
available_resources=(
pvc
sts
deploy
svc
cm
ing
secrets
limits
quota
roles
rolebindings
job
)
available_cluster_resources=(
sc
pv
psp
clusterroles
clusterrolebindings
)
USAGE: k8s-export.sh {-n namespace [-c new-namespace] | -g} [-h] [-k kubeconfig] [-r resource-1] [-r resource-2] [-i inputfile (e.g. deploy)] [-i pvc]
This script exports Kubernetes resource configs. The exported resources can be limited with -r to specific resource types (e.g. -r pvc -r svc) and with -i to a subset of resources in that namespace.

Arguments:
-n: set source to namespace
-g: set source to cluster
-r: limit resources to the specified types (can be repeated multiple times)
-i: limit exported resources to an input file (newline-separated); the file name must match the name of the resource type (can be repeated multiple times)
-c: change the namespace to the specified value
-k: path to a kubectl kubeconfig file

If -r is not set, all resource types are exported.
./k8s-cluster-export -n test-ns
./k8s-cluster-export -n test-ns -k kube.yml
./k8s-cluster-export -g
./k8s-cluster-export -n old-ns -c new-ns
./k8s-cluster-export -n test-ns -r pvc
This limitation is possible for all available resource types. The input file name must match the resource type.
./k8s-cluster-export -n test-ns -r pvc -i pvc -r deploy -i deploy
deploy file
test-deploy-name1
test-deploy-name2
pvc file
pvc-name-1
pvc-name-2
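The input files are plain newline-separated lists of resource names. A minimal sketch of creating and consuming such a file (the file name `deploy` matches the resource type, the deployment names are the hypothetical ones from above):

```shell
# create an input file listing the deployments to export (one name per line)
printf 'test-deploy-name1\ntest-deploy-name2\n' > deploy

# the script processes one resource name per line; the same read loop manually:
while IFS= read -r name; do
  echo "exporting deployment: $name"
done < deploy
```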
Disclosure: even if this is working in my case, it has to be tested carefully. Since Kubernetes has the permission to delete your EBS volumes, such a migration should be planned with great caution.
./k8s-cluster-export -n test-ns
for i in */5_deploy/*; do
deploy=$(yq r $i metadata.name)
kubectl scale deployment --replicas 0 $deploy
done
for i in */6_sts/*; do
sts=$(yq r $i metadata.name)
kubectl scale statefulset --replicas 0 $sts
done
for i in */2_pv/*; do
  vol=$(yq r $i spec.awsElasticBlockStore.volumeID | cut -f4 -d'/')
  echo $vol
  aws ec2 describe-volumes --volume-id $vol | jq -r ".Volumes[].State"
done
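The `cut -f4 -d'/'` step works because the volumeID in an AWS-backed PV has the form `aws://<availability-zone>/<volume-id>`, so the fourth `/`-separated field is the bare EBS volume ID (the value below is a hypothetical example):

```shell
# volumeID as it appears in an exported PV (hypothetical value)
volume_id="aws://eu-central-1a/vol-0123456789abcdef0"

# fields split on '/': 1="aws:", 2="", 3="eu-central-1a", 4="vol-..."
echo "$volume_id" | cut -f4 -d'/'   # prints vol-0123456789abcdef0
```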
OLD_CLUSTER=old-cluster-123
NEW_CLUSTER=new-cluster-123
for i in */2_pv/*; do
vol=$(yq r $i spec.awsElasticBlockStore.volumeID | cut -f4 -d'/')
aws ec2 delete-tags --resources $vol --tags Key=kubernetes.io/cluster/$OLD_CLUSTER
aws ec2 delete-tags --resources $vol --tags Key=KubernetesCluster
aws ec2 create-tags --resources $vol --tags Key=kubernetes.io/cluster/$NEW_CLUSTER,Value=owned
done
for i in */2_pv/*; do
vol=$(yq r $i spec.awsElasticBlockStore.volumeID | cut -f4 -d'/')
aws ec2 create-snapshot --volume-id $vol --description before-migration >> snap-output
done
for i in $(cat snap-output | jq -r ".SnapshotId"); do
echo $i
aws ec2 describe-snapshots --snapshot-ids $i | jq -r ".Snapshots[].State"
done
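The snap-output file contains one JSON document appended per `create-snapshot` call; `jq` iterates over concatenated documents, so `.SnapshotId` yields every ID in the file. A local sketch with hypothetical snapshot IDs:

```shell
# simulate two appended create-snapshot responses (hypothetical IDs)
printf '%s\n' '{"SnapshotId":"snap-0aaa","State":"pending"}' \
              '{"SnapshotId":"snap-0bbb","State":"pending"}' > snap-output

jq -r '.SnapshotId' snap-output   # prints snap-0aaa and snap-0bbb
```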
Take care to import PVCs first followed by PVs.
kubectl apply -f default/1_pvc/.
kubectl apply -f default/2_pv/.
Check the PVC status and deploy all other resources after that.
kubectl apply -f default/3_cm/.
...
kubectl apply -f default/8_ing/.
./k8s-cluster-export -n old-ns -c new-ns
See cluster migration
for i in */2_pv/*; do
pv=$(yq r $i metadata.name)
kubectl patch pv $pv -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
done
for i in */1_pvc/*; do
pvc=$(yq r $i metadata.name)
kubectl delete pvc -n old-ns $pvc
done
Check that the PVs are all in state Released, then remove the claimRef from the PVs.
for i in */2_pv/*; do
pv=$(yq r $i metadata.name)
kubectl get pv $pv
done
for i in */2_pv/*; do
pv=$(yq r $i metadata.name)
kubectl patch pv $pv --type=json -p='[{"op": "remove", "path": "/spec/claimRef"}]'
done
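The JSON patch deletes `spec.claimRef`, after which a Released PV becomes Available again and can be bound by the newly created PVC. What the patch does to the object can be simulated locally with `jq` on a minimal PV fragment (all values are hypothetical):

```shell
# minimal PV fragment with a stale claimRef (hypothetical values)
echo '{"spec":{"claimRef":{"name":"old-pvc","namespace":"old-ns"},"storageClassName":"gp2"}}' \
  | jq 'del(.spec.claimRef)'   # claimRef is gone, the rest of spec is untouched
```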
Don't re-import the PVs because they are still there.
kubectl apply -f old-ns/1_pvc/.
kubectl apply -f old-ns/5_deploy/.
...