kubernetes-sigs/aws-ebs-csi-driver

Support Amazon EBS local snapshots on Outposts

smohandass opened this issue · 7 comments

/kind bug

What happened?

According to this, the AWS EBS CSI driver supports creating EBS volumes for worker nodes running in AWS Outposts subnets.

When creating a snapshot using the VolumeSnapshot CRD, the snapshot is created in the AWS Region and not on the Outpost.

According to the AWS documentation linked here, the snapshot can be created on the Outpost by passing the Outpost ARN. The VolumeSnapshot CRD, however, doesn't provide a way to specify that ARN.
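For reference, outside of Kubernetes a local snapshot is requested by passing the Outpost ARN to the EC2 CreateSnapshot API; a rough sketch with the AWS CLI (placeholder volume ID and Outpost ARN, and the source volume must itself live on the Outpost) looks like:

# Placeholders: substitute a real volume ID and Outpost ARN
aws ec2 create-snapshot \
  --volume-id vol-0123456789abcdef0 \
  --outpost-arn arn:aws:outposts:us-east-1:123456789012:outpost/op-0123456789abcdef0 \
  --description "EBS local snapshot on Outpost"

The ask is for the driver and snapshotter to expose an equivalent option.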

What you expected to happen?
I expect the EBS CSI driver to provide a way to specify the Outpost ARN.

How to reproduce it (as minimally and precisely as possible)?

  1. Create an EKS cluster with a worker node on an Outpost.
  2. Deploy a k8s application with a dynamically provisioned EBS volume running on the Outpost.
  3. Create a snapshot using the VolumeSnapshot CRD and notice that the snapshot is created in the Region and NOT on the Outpost (a quick way to check is sketched below).
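One way to check where the snapshot landed (assuming the AWS CLI is configured and you have the snapshot ID from the VolumeSnapshotContent) is to look at the OutpostArn field, which is only set for local snapshots:

# snap-0123456789abcdef0 is a placeholder snapshot ID
aws ec2 describe-snapshots --snapshot-ids snap-0123456789abcdef0 --query 'Snapshots[].OutpostArn'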

Anything else we need to know?:

Environment

  • Kubernetes version (use kubectl version): v1.29.4
  • Driver version: 1.32.0

/kind feature
/retitle Support Amazon EBS local snapshots on Outposts

Hi, thanks for the feature request - I've added it to our backlog for evaluation. I can't promise any specific ETA at this time but we'll provide updates on this issue as they are available.

@ElijahQuinones Is the fix available as part of the AWS EBS CSI Driver add-on to test?

@smohandassk10 It will be available as part of the AWS EBS CSI Driver September release.

@ElijahQuinones I see that a new release, v1.35.0, came out today. I tried to test creating a VolumeSnapshot on a local Outpost and didn't succeed. Below are the steps I followed:

  1. Created an EKS cluster with the control plane in the Region and a node group on the Outpost.
  2. Deployed the latest version (v1.35.0) of the EBS CSI driver using the Helm chart.
  3. Deployed the latest version (v8.0.1) of the external snapshotter from https://github.com/kubernetes-csi/external-snapshotter
  4. Created a VolumeSnapshotClass and added the outpostarn parameter:
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-aws-vsc
driver: ebs.csi.aws.com
deletionPolicy: Delete
parameters:
  outpostarn: {arn of the outpost}
  5. Created a StorageClass using gp2 and deployed a test workload:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  name: ebs-sc
parameters:
  type: gp2
provisioner: ebs.csi.aws.com
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
  6. Created a VolumeSnapshot of the PVC.
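A minimal manifest along these lines (snapshot name and namespace taken from the events below; the PVC name ebs-claim is a placeholder) would be:

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: ebs-claim-snapshot
  namespace: test-workload
spec:
  volumeSnapshotClassName: csi-aws-vsc
  source:
    # placeholder; use the PVC bound to the Outpost-backed volume
    persistentVolumeClaimName: ebs-claim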

The VolumeSnapshot is stuck with READYTOUSE as false, and the events show: Waiting for a snapshot test-workload/ebs-claim-snapshot to be created by the CSI driver.

The snapshot-controller logs don't show any useful messages. Logs attached:
snapshot-controller-5cbf95f44-mh6qd.log
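For completeness, the snapshot objects can also be inspected directly (object names taken from the events above):

kubectl describe volumesnapshot ebs-claim-snapshot -n test-workload
kubectl get volumesnapshotcontent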

Am I missing anything in the setup/configuration?

Hi @smohandass, please ensure the snapshot CRDs are also installed:

kubectl get volumesnapshotclasses.snapshot.storage.k8s.io

If you see this output:

error: the server doesn't have a resource type "volumesnapshotclasses"

you will need to install the snapshot CRDs; see the relevant doc here for instructions.
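One common way to install them from the external-snapshotter repo (paths per the v8.x repo layout; verify against the doc above) is:

# run from a clone of https://github.com/kubernetes-csi/external-snapshotter
kubectl kustomize client/config/crd | kubectl create -f -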

I have installed the snapshot CRDs (step 3 in my post above) and also created the VolumeSnapshotClass resource with the outpostarn parameter.

@smohandass Can you confirm that the csi-snapshotter sidecar container is deployed? If so, please provide its logs:

kubectl logs ebs-csi-controller-794b68f64c-d4z9j -n kube-system -c csi-snapshotter

you'll need to replace ebs-csi-controller-794b68f64c-d4z9j with the appropriate controller pod name.