admiraltyio/admiralty

Problem mounting persistent volumes in Admiralty

simonbonnefoy opened this issue · 2 comments

Hello,

I have been trying to mount persistent volumes in Admiralty.
However, I encountered an odd situation.
I have tested the PersistentVolumeClaim without multicluster-scheduler and everything works well.
My situation is the following: I have two clusters running on Google Kubernetes Engine. One source cluster (cluster-cd) and one target cluster (cluster-1).
I have created two PersistentVolumeClaims on each cluster:
cluster-cd -> pvc-cd and pvc-demo
cluster-1 -> pvc-1 and pvc-demo

Note that the two pvc-demo claims do not point to the same PersistentVolume; only the name is the same.
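For reference, each claim was created with a manifest along these lines (the size is illustrative, and I rely on the cluster's default StorageClass to dynamically provision a Persistent Disk):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-demo   # likewise pvc-cd and pvc-1, each created in its own cluster
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi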
Also, I use the following Job to test them (adapted from the quick start guide):

apiVersion: batch/v1
kind: Job
metadata:
  name: admiralty-pvc-test
spec:
  template:
    metadata:
      annotations:
        multicluster.admiralty.io/elect: ""
    spec:
      volumes:
        - name: task-pv-storage
          persistentVolumeClaim:
            claimName: pvc-demo
      containers:
      - name: c
        image: busybox
        command: ["sh", "-c", "echo hello world: && echo hello world >> /mnt/data/hello.txt && ifconfig && df -h"]
        # command: ["sh", "-c", "cat /mnt/data/hello.txt && ifconfig && echo ------- && df -h"]
        volumeMounts:
          - mountPath: "/mnt/data/"
            name: task-pv-storage
        resources:
          requests:
            cpu: 100m
      restartPolicy: Never
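
For completeness, I apply the Job and then look at the pods in both clusters roughly like this (the file name and context names are placeholders for whatever is in my kubeconfig):

kubectl --context cluster-cd apply -f admiralty-pvc-test.yaml
kubectl --context cluster-cd get pods    # proxy pod in the source cluster
kubectl --context cluster-1 get pods     # delegate pod in the target cluster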

Case 1

I set claimName: pvc-cd (the PVC on the source cluster).
The pods stay in Pending status (in both the source and target clusters), and the pod description in the source cluster context gives me the following error.

Warning  FailedScheduling  44s (x3 over 111s)  admiralty-proxy  0/4 nodes are available: 1 , 3 node(s) didn't match node selector.

Case 2

I set claimName: pvc-1 (the PVC on the target cluster).
The pods stay in Pending status (only in the source cluster this time; they do not even show up in the target cluster).
The pod description in the source cluster context gives me the following error.

Warning  FailedScheduling  48s (x3 over 118s)  admiralty-proxy  persistentvolumeclaim "pvc-1" not found

Case 3

I set claimName: pvc-demo (a PVC that exists on both clusters, but refers to different PersistentVolumes).
In this case, it seems to be working. However, the output of echo hello world >> /mnt/data/hello.txt is written
to the PVC of the target cluster.

Conclusion

I understand the behavior in the three cases. However, is there a way to use PersistentVolumeClaims with Admiralty? I am interested in them because I would like to plug them into an Argo workflow to produce input and output data sets.
Is there a good way to do that with Admiralty/Argo, or should I use buckets?
I have not found anything about this in the documentation, but maybe I have overlooked something.

Thanks in advance!

Hi @simonbonnefoy, PVCs and PVs aren't specifically supported yet. As you saw, you had to copy pvc-demo to the target cluster for scheduling to work (Admiralty didn't make the one in the source cluster "follow"). Then the two pvc-demos gave birth to two independent PVs referring to different Google Persistent Disks.

  • Would you like them to refer to the same disk? What if the clusters are in different regions? That may confuse the CSI driver.
  • Would you like them to refer to different disks with data replication? You'd need a 3rd-party CSI driver.
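
To be clear about the current behavior: the claim has to exist in every cluster where the delegate pod may run, and each copy is bound independently, e.g. (context names are placeholders):

kubectl --context source-cluster-context apply -f pvc-demo.yaml
kubectl --context target-cluster-context apply -f pvc-demo.yaml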

I would recommend using buckets for your use case. Here's an equivalent demo on AWS (at 00:10:20): https://zoom.us/rec/share/0Ve24HgWnCkz474Q84wD2LgjXtO4UHSB_Bp1vJwrMf0lSXucBQoK4xKcz7qx63Pz.ZvD7hs0H9b0SxXLW

Hi @adrienjt

Thanks a lot for your reply! In the end I was able to make it work using GCS buckets.
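
For anyone finding this later, a rough sketch of the approach (bucket, key, and Secret names are placeholders, and I'm assuming the GCS artifact support built into Argo Workflows): each step writes its result to a local path and declares it as a GCS output artifact, while the elect annotation still lets Admiralty schedule the step across clusters.

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: admiralty-gcs-
spec:
  entrypoint: produce
  templates:
    - name: produce
      metadata:
        annotations:
          multicluster.admiralty.io/elect: ""
      container:
        image: busybox
        command: ["sh", "-c", "echo hello world > /tmp/hello.txt"]
      outputs:
        artifacts:
          - name: result
            path: /tmp/hello.txt
            gcs:
              bucket: my-bucket                  # placeholder bucket name
              key: outputs/hello.txt             # placeholder object key
              serviceAccountKeySecret:
                name: my-gcs-credentials         # placeholder Secret holding a GCP service account key
                key: serviceAccountKey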