vmware-tanzu/velero

Velero cannot restore data inside PVs

Closed this issue · 36 comments

When I try to restore from the backup, all PVs and PVCs can be found, but the data inside them is missing.
command:
deploy:
velero install --provider aws --use-node-agent --privileged-node-agent --plugins velero-plugin-for-aws:v1.4.1,velero-plugin-for-csi:v0.4.2 --bucket prod-backup --secret-file /root/middleware/velero/velero-auth.txt --namespace velero --image velero:v1.13.1 --snapshot-location-config region=minio --backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://192.168.127.131:9000 --uploader-type restic

backup:
velero backup create cluster-backup888 --include-namespaces prod --include-resources persistentvolumeclaims,persistentvolumes --item-operation-timeout 600s --default-volumes-to-fs-backup
restore:
velero restore create redis-restore --from-backup cluster-backup888

What should I do?

Velero's restic integration does not restore data to volumes when the backup does not include the pods mounting those volumes.
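For example, a backup that also captures the pods mounting those volumes (a rough sketch; the backup name is made up, the prod namespace is the one from above) could be:

velero backup create prod-backup-with-pods --include-namespaces prod --include-resources persistentvolumeclaims,persistentvolumes,pods --default-volumes-to-fs-backup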

So... a command like this?
velero backup create cluster-857 --include-namespaces prod --include-resources persistentvolumeclaims,persistentvolumes.pods,serviceaccounts,secrets --item-operation-timeout 600s --default-volumes-to-fs-backup

Also, when I try to back up the prod namespace, the restore is still not successful:
command:
velero backup create cluster-859 --include-namespaces prod
velero restore create redis-result --from-backup cluster-859

[root@k8s-master-prod data]# cd prod-redis-data-redis-cluster-0-pvc-3e40d814-af8b-4c75-b602-1bb28c355d63/
[root@k8s-master-prod prod-redis-data-redis-cluster-0-pvc-3e40d814-af8b-4c75-b602-1bb28c355d63]# ls
[root@k8s-master-prod prod-redis-data-redis-cluster-0-pvc-3e40d814-af8b-4c75-b602-1bb28c355d63]#

Secondly, Velero does not overwrite an existing volume.

You have to delete the PVC/PV for the volume to be restored with data.
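For example, to clear one of the volumes shown in the listing above before restoring (names taken from this thread; adjust to your own PVC/PV):

kubectl -n prod delete pvc redis-data-redis-cluster-0
kubectl delete pv pvc-3e40d814-af8b-4c75-b602-1bb28c355d63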

I already deleted the PVC/PV, and I also deleted the PV folders from the storage backend; my Kubernetes storage is nfs-client-provisioner.

I also tried v1.15.0 to back up resources, and got this:
Name: cluster1
Namespace: velero
Labels: velero.io/storage-location=default
Annotations: velero.io/resource-timeout=10m0s
velero.io/source-cluster-k8s-gitversion=v1.28.1
velero.io/source-cluster-k8s-major-version=1
velero.io/source-cluster-k8s-minor-version=28

Phase: Failed (run velero backup logs cluster1 for more information)

Namespaces:
Included: prod
Excluded:

Resources:
Included: persistentvolumeclaims, persistentvolumes.pods, serviceaccounts, secrets
Excluded:
Cluster-scoped: auto

Label selector:

Or label selector:

Storage Location: default

Velero-Native Snapshot PVs: true
Snapshot Move Data: false
Data Mover: velero

TTL: 720h0m0s

CSISnapshotTimeout: 10m0s
ItemOperationTimeout: 10m0s

Hooks:

Backup Format Version: 1.1.0

Started: 2024-12-05 14:15:25 +0800 CST
Completed: <n/a>

Expiration: 2025-01-04 14:15:25 +0800 CST

Backup Volumes:
<error getting backup volume info: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline>

command:
velero install --provider aws --use-node-agent --privileged-node-agent --plugins velero-plugin-for-aws:v1.4.1 --bucket prod-backup --secret-file /root/middleware/velero/velero-auth.txt --namespace velero --image velero:v1.15.0 --use-volume-snapshots --snapshot-location-config region=minio --backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://192.168.127.131:9000 --uploader-type restic

velero backup create cluster1 --include-namespaces prod --include-resources persistentvolumeclaims,persistentvolumes.pods,serviceaccounts,secrets --item-operation-timeout 600s --snapshot-volumes=true --default-volumes-to-fs-backup=true

time="2024-12-05T06:40:53Z" level=error msg="Error getting backup store for this location" backupLocation=velero/default controller=backup-sync error="rpc error: code = Unknown desc = Invalid s3 url http://192.168.127.131:9000\u00a0--uploader-type, URL must be valid according to https://golang.org/pkg/net/url/#Parse and start with http:// or https://" error.file="/go/src/velero-plugin-for-aws/velero-plugin-for-aws/object_store.go:255" error.function=main.newAWSConfig logSource="pkg/controller/backup_sync_controller.go:103"
time="2024-12-05T06:40:53Z" level=info msg="plugin process exited" backupLocation=velero/default cmd=/plugins/velero-plugin-for-aws controller=backup-sync id=93 logSource="pkg/plugin/clientmgmt/process/logrus_adapter.go:80" plugin=/plugins/velero-plugin-for-aws
time="2024-12-05T06:40:53Z" level=error msg="Error getting a backup store" backup-storage-location=velero/default controller=backup-storage-location error="rpc error: code = Unknown desc = Invalid s3 url http://192.168.127.131:9000\u00a0--uploader-type, URL must be valid according to https://golang.org/pkg/net/url/#Parse and start with http:// or https://" error.file="/go/src/velero-plugin-for-aws/velero-plugin-for-aws/object_store.go:255" error.function=main.newAWSConfig logSource="pkg/controller/backup_storage_location_controller.go:138"
time="2024-12-05T06:40:53Z" level=info msg="BackupStorageLocation is invalid, marking as unavailable" backup-storage-location=velero/default controller=backup-storage-location logSource="pkg/controller/backup_storage_location_controller.go:121"
time="2024-12-05T06:40:53Z" level=error msg="Current BackupStorageLocations available/unavailable/unknown: 0/1/0, BackupStorageLocation "default" is unavailable: rpc error: code = Unknown desc = Invalid s3 url http://192.168.127.131:9000\u00a0--uploader-type, URL must be valid according to https://golang.org/pkg/net/url/#Parse and start with http:// or https://)" controller=backup-storage-location logSource="pkg/controller/backup_storage_location_controller.go:179"
time="2024-12-05T06:40:53Z" level=info msg="plugin process exited" backup-storage-location=velero/default cmd=/plugins/velero-plugin-for-aws controller=backup-storage-location id=101 logSource="pkg/plugin/clientmgmt/process/logrus_adapter.go:80" plugin=/plugins/velero-plugin-for-aws

By the way, one more question: how do I change the image used for the restore step?

@klllmxx create this ConfigMap:

apiVersion: v1
kind: ConfigMap
metadata:
  # any name can be used; Velero uses the labels (below)
  # to identify it rather than the name
  name: fs-restore-action-config
  # must be in the velero namespace
  namespace: velero
  # the below labels should be used verbatim in your
  # ConfigMap.
  labels:
    # this value-less label identifies the ConfigMap as
    # config for a plugin (i.e. the built-in restore
    # item action plugin)
    velero.io/plugin-config: ""
    # this label identifies the name and kind of plugin
    # that this ConfigMap is for.
    velero.io/pod-volume-restore: RestoreItemAction
data:
  # The value for "image" can either include a tag or not;
  # if the tag is *not* included, the tag from the main Velero
  # image will automatically be used.
  image: myregistry.io/my-custom-helper-image[:OPTIONAL_TAG]
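Once the ConfigMap manifest is saved to a file (the filename below is just an example), apply it with:

kubectl -n velero apply -f fs-restore-action-config.yaml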

Is it best to deploy this after installing Velero, or is there another way? And is it necessary to create it before running the Velero install step?

IMO, create the ConfigMap, then create/install the Velero deployment.

Hmm... my deployment was done with the velero install command (CRDs), not with Helm...

Also, does the restore step have to include pod information?

If your backup is specific enough to your workload, such as only your intended namespaces, you can just restore from that backup without specifying anything else. Otherwise you can filter what gets restored by namespace or resource type.
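For instance (the restore name below is hypothetical, and the backup name is the one from earlier in this thread), a filtered restore could look like:

velero restore create prod-restore --from-backup cluster-backup888 --include-namespaces prod --include-resources persistentvolumeclaims,persistentvolumes,pods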

See this link:
https://techdocs.broadcom.com/us/en/vmware-tanzu/application-catalog/tanzu-application-catalog/services/tac-doc/apps-tutorials-backup-restore-data-redis-cluster-kubernetes-index.html
It uses Velero to do the restore step and successfully restores the persistent data without needing pods, configmaps, and so on. For now my restore job is not successful; I think that is the problem.

Command
velero restore create 11 --from-backup 66dasc --include-resources=PersistentVolumeClaim,PersistentVolum

For now my restore job is not successful; I think that is the problem.

run velero restore describe <restore-name> --details

FULL BACKUP AND RESTORE INFO DOWN BELOW↓
[root@k8s-master-prod data]# velero backup describe 66dasc --details
Name: 66dasc
Namespace: velero
Labels: velero.io/storage-location=default
Annotations: velero.io/resource-timeout=10m0s
velero.io/source-cluster-k8s-gitversion=v1.28.1
velero.io/source-cluster-k8s-major-version=1
velero.io/source-cluster-k8s-minor-version=28

Phase: Completed

Namespaces:
Included: prod
Excluded:

Resources:
Included: *
Excluded:
Cluster-scoped: auto

Label selector: app.kubernetes.io/name=redis-cluster

Or label selector:

Storage Location: default

Velero-Native Snapshot PVs: true
Snapshot Move Data: false
Data Mover: velero

TTL: 720h0m0s

CSISnapshotTimeout: 10m0s
ItemOperationTimeout: 10m0s

Hooks:

Backup Format Version: 1.1.0

Started: 2024-12-06 14:58:23 +0800 CST
Completed: 2024-12-06 14:58:41 +0800 CST

Expiration: 2025-01-05 14:58:23 +0800 CST

Total items to be backed up: 33
Items backed up: 33

Resource List:
apps/v1/ControllerRevision:
- prod/redis-cluster-7f89475f94
apps/v1/StatefulSet:
- prod/redis-cluster
discovery.k8s.io/v1/EndpointSlice:
- prod/redis-cluster-css5l
- prod/redis-cluster-headless-kjwtg
networking.k8s.io/v1/NetworkPolicy:
- prod/redis-cluster
policy/v1/PodDisruptionBudget:
- prod/redis-cluster
v1/ConfigMap:
- prod/redis-cluster-default
- prod/redis-cluster-scripts
v1/Endpoints:
- prod/redis-cluster
- prod/redis-cluster-headless
v1/Namespace:
- prod
v1/PersistentVolume:
- pvc-00bb4e24-f73d-4a76-ab76-c7847a6acaba
- pvc-0c619afb-f5ce-4894-b0d5-a269b5f29efc
- pvc-6bcc0b9c-e659-446c-9b17-696dcf96077d
- pvc-7b0df581-982a-4ad2-ac66-60f54d9dad9b
- pvc-7c0923dc-8259-4aa0-978f-3296df1ab5d9
- pvc-85efcd42-b9e0-4ed7-bf19-39428abfc13f
v1/PersistentVolumeClaim:
- prod/redis-data-redis-cluster-0
- prod/redis-data-redis-cluster-1
- prod/redis-data-redis-cluster-2
- prod/redis-data-redis-cluster-3
- prod/redis-data-redis-cluster-4
- prod/redis-data-redis-cluster-5
v1/Pod:
- prod/redis-cluster-0
- prod/redis-cluster-1
- prod/redis-cluster-2
- prod/redis-cluster-3
- prod/redis-cluster-4
- prod/redis-cluster-5
v1/Secret:
- prod/redis-cluster
v1/Service:
- prod/redis-cluster
- prod/redis-cluster-headless
v1/ServiceAccount:
- prod/redis-cluster

Backup Volumes:
Velero-Native Snapshots:

CSI Snapshots:

Pod Volume Backups - kopia:
Completed:
prod/redis-cluster-0: empty-dir, redis-data
prod/redis-cluster-1: empty-dir, redis-data
prod/redis-cluster-2: empty-dir, redis-data
prod/redis-cluster-3: empty-dir, redis-data
prod/redis-cluster-4: empty-dir, redis-data
prod/redis-cluster-5: empty-dir, redis-data

HooksAttempted: 0
HooksFailed: 0

[root@k8s-master-prod data]# velero restore create 225 --from-backup 66dasc --include-resources=pv,pvc
Restore request "225" submitted successfully.
Run velero restore describe 225 or velero restore logs 225 for more details.

[root@k8s-master-prod data]# velero restore describe 225 --details
Name: 225
Namespace: velero
Labels:
Annotations:

Phase: Completed
Total items to be restored: 12
Items restored: 12

Started: 2024-12-10 10:27:21 +0800 CST
Completed: 2024-12-10 10:27:21 +0800 CST

Backup: 66dasc

Namespaces:
Included: all namespaces found in the backup
Excluded:

Resources:
Included: pv, pvc
Excluded: nodes, events, events.events.k8s.io, backups.velero.io, restores.velero.io, resticrepositories.velero.io, csinodes.storage.k8s.io, volumeattachments.storage.k8s.io, backuprepositories.velero.io
Cluster-scoped: auto

Namespace mappings:

Label selector:

Or label selector:

Restore PVs: auto

CSI Snapshot Restores:

Existing Resource Policy:
ItemOperationTimeout: 4h0m0s

Preserve Service NodePorts: auto

Uploader config:

HooksAttempted: 0
HooksFailed: 0

Resource List:
v1/PersistentVolume:
- pvc-00bb4e24-f73d-4a76-ab76-c7847a6acaba(skipped)
- pvc-0c619afb-f5ce-4894-b0d5-a269b5f29efc(skipped)
- pvc-6bcc0b9c-e659-446c-9b17-696dcf96077d(skipped)
- pvc-7b0df581-982a-4ad2-ac66-60f54d9dad9b(skipped)
- pvc-7c0923dc-8259-4aa0-978f-3296df1ab5d9(skipped)
- pvc-85efcd42-b9e0-4ed7-bf19-39428abfc13f(skipped)
v1/PersistentVolumeClaim:
- prod/redis-data-redis-cluster-0(created)
- prod/redis-data-redis-cluster-1(created)
- prod/redis-data-redis-cluster-2(created)
- prod/redis-data-redis-cluster-3(created)
- prod/redis-data-redis-cluster-4(created)
- prod/redis-data-redis-cluster-5(created)

CLUSTER RESOURCES
[root@k8s-master-prod data]# kubectl get pvc -n prod
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
redis-data-redis-cluster-0 Bound pvc-91c4f61f-328e-49e3-94ae-317e9fe15355 8Gi RWO app-storage 5m17s
redis-data-redis-cluster-1 Bound pvc-8534be7e-14ca-49c9-9aaf-554c3fe2ba9b 8Gi RWO app-storage 5m17s
redis-data-redis-cluster-2 Bound pvc-a639a62d-6214-409b-b470-e6f447a41203 8Gi RWO app-storage 5m17s
redis-data-redis-cluster-3 Bound pvc-0f1f6c3a-c351-4071-9ad5-081fa174777e 8Gi RWO app-storage 5m17s
redis-data-redis-cluster-4 Bound pvc-e9c3ecf9-6384-4934-aeb8-4d20c72c883f 8Gi RWO app-storage 5m17s
redis-data-redis-cluster-5 Bound pvc-5e73d7da-fad2-4965-bd9a-ea75895f4dba 8Gi RWO app-storage 5m17s
[root@k8s-master-prod data]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-0c0f8c2d-d421-43b6-8e17-c95aae5244af 1Gi RWX Retain Bound default/predixy-pvc-nfs app-storage 40d
pvc-0ec17689-7d5b-4d14-acd0-af3ee1562ab5 2Gi RWO Delete Bound lianhe/broker-storage-rocketmq-broker-replica-id2-1 app-storage 49d
pvc-0f1f6c3a-c351-4071-9ad5-081fa174777e 8Gi RWO Retain Bound prod/redis-data-redis-cluster-3 app-storage 5m22s
pvc-54b6584b-5056-4096-8ec8-9a6671dbdbb9 2Gi RWO Delete Bound lianhe/broker-storage-rocketmq-broker-master-1 app-storage 49d
pvc-5e73d7da-fad2-4965-bd9a-ea75895f4dba 8Gi RWO Retain Bound prod/redis-data-redis-cluster-5 app-storage 5m22s
pvc-64196458-8723-4aba-b06c-1b12b48bdde6 2Gi RWO Delete Bound lianhe/broker-storage-rocketmq-broker-replica-id1-1 app-storage 49d
pvc-6f1489f1-3299-400b-b054-2ff1a7bc47e4 2Gi RWO Delete Bound lianhe/nameserver-storage-rocketmq-nameserver-0 app-storage 49d
pvc-8534be7e-14ca-49c9-9aaf-554c3fe2ba9b 8Gi RWO Retain Bound prod/redis-data-redis-cluster-1 app-storage 5m25s
pvc-91c4f61f-328e-49e3-94ae-317e9fe15355 8Gi RWO Retain Bound prod/redis-data-redis-cluster-0 app-storage 5m25s

NFS SERVER
[root@k8s-master-prod data]# ls -alh
drwxr-xr-x 10 root root 16K 12 10 10:27 .
dr-xr-xr-x. 18 root root 256 10 15 11:05 ..
drwxr-xr-x 213 root root 20K 12 10 10:25 backup
drwxr-xr-x 2 root root 4.0K 12 10 08:00 default-predixy-pvc-nfs-pvc-0c0f8c2d-d421-43b6-8e17-c95aae5244af
drwxrwxrwx 2 root root 6 12 10 10:27 prod-redis-data-redis-cluster-0-pvc-91c4f61f-328e-49e3-94ae-317e9fe15355
drwxrwxrwx 2 root root 6 12 10 10:27 prod-redis-data-redis-cluster-1-pvc-8534be7e-14ca-49c9-9aaf-554c3fe2ba9b
drwxrwxrwx 2 root root 6 12 10 10:27 prod-redis-data-redis-cluster-2-pvc-a639a62d-6214-409b-b470-e6f447a41203
drwxrwxrwx 2 root root 6 12 10 10:27 prod-redis-data-redis-cluster-3-pvc-0f1f6c3a-c351-4071-9ad5-081fa174777e
drwxrwxrwx 2 root root 6 12 10 10:27 prod-redis-data-redis-cluster-4-pvc-e9c3ecf9-6384-4934-aeb8-4d20c72c883f
drwxrwxrwx 2 root root 6 12 10 10:27 prod-redis-data-redis-cluster-5-pvc-5e73d7da-fad2-4965-bd9a-ea75895f4dba
[root@k8s-master-prod data]# ls -alh prod-redis-data-redis-cluster-0-pvc-91c4f61f-328e-49e3-94ae-317e9fe15355
drwxrwxrwx 2 root root 6 12 10 10:27 .
drwxr-xr-x 10 root root 16K 12 10 10:27 ..

So is there a decent solution for doing this now?

Phase: Completed

Looked good, right? What's the issue now?

The PV persistent data does not come back, but everything else does.
Could you look at this link: https://techdocs.broadcom.com/us/en/vmware-tanzu/application-catalog/tanzu-application-catalog/services/tac-doc/apps-tutorials-backup-restore-data-redis-cluster-kubernetes-index.html
and at my full steps?

I can see it, but I don't use Helm or Tanzu, so I don't have specifics on whether it should work or not.

Pay attention, sir! The link I gave is actually about using Velero to restore a redis-cluster! And my Kubernetes CSI storage is NFS! What is the relationship with Helm or Tanzu?

By the way, that link backs up only the PV and PVC and still successfully restores the PV's persistent data.

What is the relationship with Helm or Tanzu?

The article you linked mentions:

You have configured Helm to use the Tanzu Application Catalog chart repository following the instructions for Tanzu Application Catalog.
You have previously deployed the Redis Cluster Helm chart on the source cluster and added some data to it. Example command

I am not familiar with Redis, and I do not have access to a Tanzu Platform cloud services account per the prerequisites of the document, so unfortunately I will not be able to reproduce your issue. Perhaps other maintainers more familiar with the document can help.

That's a simple question; it has nothing to do with Helm or the cloud. Basically, Velero backs up only the PV and PVC resources, that backup is used to restore to the newly migrated cluster, and the data comes back. Do you understand what I'm saying? By the way, that article was not written by me. Thank you, sir~

do you understand what I'm saying

Honestly, I was trying my best to understand what you mean and may have misunderstood the question.

If I understand your question, you mean to ask whether you could back up just the PV/PVC and nothing else (i.e. without pods), like the article, and restore that PV/PVC data in the new cluster.

Yes, but you need to:

  1. Not use file system backup. Instead, either:
    1. Use a VolumeSnapshotLocation provided by the cloud provider plugin; in the article you can see in the velero install screenshot that they created a volumesnapshotlocation as well. This method will not move snapshots to a different cloud provider and/or region. For AWS, I would read the entire plugin README. I think the article you linked is being purposefully vague; you need to follow the cloud plugin docs.
    2. Use the Velero built-in data mover (see the sketch after this list).
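As a rough sketch of option 2 (assuming Velero v1.14+ with a CSI driver and a VolumeSnapshotClass available; the backup name is hypothetical), a backup that moves snapshot data to object storage without relying on pods could look like:

velero backup create prod-pv-move --include-namespaces prod --include-resources persistentvolumeclaims,persistentvolumes --snapshot-move-data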

The backup and restore steps were the same as those shown in the link I sent you.
Also, my backend storage is NFS with nfs-client-provisioner, sir.

If your NFS client provisioner does not have a CSI driver with snapshot capability, then without adding pods (to use file system backup) you won't be able to migrate PV/PVC data.
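A quick way to check whether the cluster actually has a CSI driver and snapshot support installed (output will vary per cluster) is:

kubectl get csidrivers
kubectl get volumesnapshotclasses   # errors if the snapshot CRDs are not installed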

Yeah, yeah~ It's fine to include pod resources in the backup, but is it necessary to include pod resources for recovery? Can't the PVC and the persistent data in the PV be restored separately?

is it necessary to include pod resources for recovery?

Yes, it is necessary with file system backup. Until Velero can create a pod for you, your backup will need to contain a pod mounting the PV for file system backup to work on those volumes.
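In practice that means restoring the pods together with the volumes. A hedged example (hypothetical restore name, reusing the 66dasc backup described above):

velero restore create prod-fs-restore --from-backup 66dasc --include-namespaces prod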

You mean the recovery step also needs to include pod resources, right?

yes

OK, now I see, good. But the official documentation does not explain this clearly, so I think it would be best to update the recovery documentation, sir.

PodVolumeRestore - represents a FSB restore of a pod volume. The main Velero restore process creates one or more of these when it encounters a pod that has associated FSB backups. Each node in the cluster runs a controller for this resource (in the same daemonset as above) that handles the PodVolumeRestores for pods on that node. PodVolumeRestore is backed by restic or kopia, the controller invokes restic or kopia internally, refer to restic integration and kopia integration for details.
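For reference, the PodVolumeRestore objects created during such a restore can be listed directly:

kubectl -n velero get podvolumerestores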

Our docs already highlight this.

If more improvement is needed, leave a suggestion or open a PR.