miracle2k/k8s-snapshots

Describe restore procedure


k8s-snapshots works like a charm, thanks!

I am on GCP, and I am looking for a way to restore a disk and attach it to a pod.

Do you have a procedure that describes this?

I do not have anything, sorry. While I am not sure that restores are exactly within the scope of the project, I would definitely merge any pull request that adds this to the docs.

@leo-baltus have you come up with anything? Restore is basically as important as backup. I would submit a PR, but having just set up this service in my cluster, I am now wondering: how do I restore? :) Googling for now, but wondering if anyone else has come up with a protocol.

@miracle2k thanks for this amazing tool!!!

I was just forced to learn how to do restores :). Here's what I did:

  • I created a new disk from the snapshot. I'm on GCE, so it was something like gcloud compute disks create NEW_DISK_NAME --zone ZONE --source-snapshot SNAPSHOT (see the fuller sketch after this list).
  • I considered mounting both the old and the newly created disks on an instance and manually copying/rsyncing data from the snapshot back onto the original disk. But then I decided to try editing the PV to see if I could just swap the new disk in for the old one. I edited spec.gcePersistentDisk.pdName, which worked! Not sure if this is officially supported, but it seems to be fine (see the patch sketch after this list).
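
For completeness, here is roughly what the disk-creation command looks like in full. NEW_DISK_NAME, ZONE and SNAPSHOT_NAME are placeholders to fill in from your own project:

```
# List available snapshots to find the one to restore from
gcloud compute snapshots list

# Create a fresh persistent disk from that snapshot
gcloud compute disks create NEW_DISK_NAME \
  --zone=ZONE \
  --source-snapshot=SNAPSHOT_NAME
```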
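
And here is a sketch of the PV swap, using kubectl patch instead of an interactive kubectl edit. my-pv and NEW_DISK_NAME are placeholders, and this assumes the PV uses the in-tree gcePersistentDisk volume source:

```
# Point the existing PV at the restored disk
kubectl patch pv my-pv --type=merge \
  -p '{"spec":{"gcePersistentDisk":{"pdName":"NEW_DISK_NAME"}}}'
```

Depending on timing, the pod using the PV may need to be deleted and recreated so the new disk actually gets attached and mounted.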

@dminkovsky Thanks! Editing the manifest to change pdName is quite an interesting approach. I'm not sure either how legit that is, but it's interesting because it's something that we could easily automate.

Right now it seems like k8s-snapshots is still working after the PV edit, but it's creating snapshots for both the new and the old disks.

Update: after deleting the old disks with gcloud, k8s-snapshots now only makes snapshots of the new (restored) disks.
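
For reference, the cleanup was just this, with OLD_DISK_NAME and ZONE being placeholders:

```
# Delete the superseded disk so k8s-snapshots stops snapshotting it
gcloud compute disks delete OLD_DISK_NAME --zone=ZONE
```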