support for nomad
I am trying to use CVMFS through the CSI plugin in Nomad, but I run into problems when creating a volume. It seems that the `access_mode` parameter configured in Nomad is not supported. Here is my volume config file:
```hcl
type      = "csi"
id        = "cvmfs-volume"
name      = "cvmfs-volume"
plugin_id = "cvmfs0"

capability {
  access_mode     = "multi-node-reader-only"
  attachment_mode = "file-system"
}

mount_options {
  fs_type = "cvmfs2"
}

secrets {}
```
The error info:

```
root@ubuntu:~/nomad-jobs# nomad volume create cvmfs.volume.hcl
Error creating volume: Unexpected response code: 500 (1 error occurred:
	* controller create volume: CSI.ControllerCreateVolume: volume "cvmfs-volume" snapshot source &{"" ""} is not compatible with these parameters: rpc error: code = InvalidArgument desc = volume accessibility requirements are not supported)
```
Can you provide an example of using cvmfs-csi in Nomad?
Ref: #51
Hi @shumin1027. It seems this error originates from having non-nil `AccessibilityRequirements` (i.e. topology) in `CreateVolumeRequest` rather than `AccessMode`.
(See cvmfs-csi/internal/cvmfs/controller/csiserver.go, lines 205 to 207 at commit 18097c8.)
I don't have a Nomad environment at hand so I cannot test this. Can you pass logs from the controller plugin (running with `-v=5` verbosity) to see what Nomad's CSI client is passing to the driver? Is there a way to pass nil topology requirements when creating a volume?
Another way to do this is to create the volume manually (similar to how you would manually create a PersistentVolume and its PersistentVolumeClaim in Kubernetes). Is this possible in Nomad? That way you would circumvent the provisioning stage.
You can see a Kubernetes example for this here: https://github.com/cvmfs-contrib/cvmfs-csi/blob/master/example/volume-pv-pvc.yaml
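If Nomad supports registering pre-existing volumes, a registration spec might look something like this sketch (the field set, in particular `external_id`, is an assumption based on Nomad's `volume register` workflow -- I haven't tested this against cvmfs-csi):

```hcl
# Hypothetical: register an existing "virtual" CVMFS volume directly,
# skipping the CSI CreateVolume call entirely. The volume is only a
# reference for cvmfs-csi; no storage is actually provisioned.
type        = "csi"
id          = "cvmfs-volume"
name        = "cvmfs-volume"
external_id = "cvmfs-volume"   # ID the node plugin receives at mount time (assumed field)
plugin_id   = "cvmfs0"

capability {
  access_mode     = "multi-node-reader-only"
  attachment_mode = "file-system"
}

mount_options {
  fs_type = "cvmfs2"
}
```

which would then be registered with something like `nomad volume register cvmfs.volume.hcl` instead of `nomad volume create`.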
@gman0 The example given above manually creates the volume with `nomad volume create cvmfs.volume.hcl`. This is the log output from the controller plugin when creating the volume:
```
I1205 13:47:16.773803 1 grpcserver.go:136] Call-ID 3443: Call: /csi.v1.Controller/CreateVolume
I1205 13:47:16.774096 1 grpcserver.go:137] Call-ID 3443: Request: {"accessibility_requirements":{},"name":"cvmfs-volume","volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"cvmfs2"}},"access_mode":{"mode":3}}]}
E1205 13:47:16.774136 1 grpcserver.go:141] Call-ID 3443: Error: rpc error: code = InvalidArgument desc = volume accessibility requirements are not supported
```
What I meant is to just register an existing volume without actually triggering a `CreateVolume` call on the driver. I'm not familiar with Nomad so I'm not sure if it's possible.
The volume itself is "virtual", it's just a reference for cvmfs-csi.
Relaxing validation and letting it pass on `"accessibility_requirements":{}` doesn't seem too unreasonable -- we may include this change in the next point release.
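The relaxed check could look something like this simplified Go sketch. The types here are stand-ins, not the actual generated CSI spec types, and `topologyIsEmpty` is a hypothetical helper name -- the idea is just to treat an empty `TopologyRequirement` (what Nomad sends as `{}`) the same as a nil one:

```go
package main

import "fmt"

// Topology and TopologyRequirement are simplified stand-ins for the
// CSI spec's generated Go types (hypothetical, for illustration only).
type Topology struct {
	Segments map[string]string
}

type TopologyRequirement struct {
	Requisite []*Topology
	Preferred []*Topology
}

// topologyIsEmpty reports whether the accessibility requirements carry
// no actual topology constraints. A nil requirement and an empty one
// ({} on the wire) are treated the same, so validation could let both
// pass instead of rejecting the empty case with InvalidArgument.
func topologyIsEmpty(tr *TopologyRequirement) bool {
	return tr == nil || (len(tr.Requisite) == 0 && len(tr.Preferred) == 0)
}

func main() {
	fmt.Println(topologyIsEmpty(nil))                    // true
	fmt.Println(topologyIsEmpty(&TopologyRequirement{})) // true: Nomad's "{}" case
	fmt.Println(topologyIsEmpty(&TopologyRequirement{
		Requisite: []*Topology{{Segments: map[string]string{"zone": "a"}}},
	})) // false: a real topology constraint
}
```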
> What I meant is to just register an existing volume without actually triggering a `CreateVolume` call on the driver. I'm not familiar with Nomad so I'm not sure if it's possible. The volume itself is "virtual", it's just a reference for cvmfs-csi.
@gman0 Executing the register command completes correctly:

```
nomad volume register cvmfs.volume.hcl
```

The CSI node plugin can correctly access the content on CVMFS, but it seems that it cannot be automatically mounted in the application container.
JuiceFS gives a good supported use case; we can use it as a reference: https://github.com/juicedata/juicefs-csi-driver/blob/master/docs/en/cookbook/csi-in-nomad.md
> The CSI node plugin can correctly access the content on CVMFS, but it seems that it cannot be automatically mounted in the application container.
Is there an error message we could troubleshoot? My first guess would be a missing `rslave` or `rshared` in the container mount (`HostToContainer` mount propagation in Kubernetes terminology). See Example: Automounting CVMFS repositories and a Pod definition example.
I'm not sure I understood correctly, but cvmfs-csi doesn't distinguish between "mount-by-pod" and "mount-by-process". The cvmfs-csi node plugin needs to be already running on all nodes of the cluster that are expected to use CVMFS volumes (DaemonSet in Kubernetes terminology).
@gman0 Thank you for your help. Your guess might be right: Nomad does not seem to support mount propagation yet.
Thanks for following this up, @shumin1027. We can continue once this is resolved in Nomad.
@gman0 I fixed this issue: https://github.com/hashicorp/nomad/issues/15524. The mount propagation option can now be successfully set when mounting the volume.

When I use a host volume and directly mount `/cvmfs` on the host into the container with mount propagation set to `rslave`, everything works as expected.
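For reference, the working host-mount setup looks roughly like this sketch (the exact `mount` block syntax of Nomad's docker driver is an assumption here and may vary by Nomad version):

```hcl
# Hypothetical task config: bind-mount the host's /cvmfs into the
# container with rslave propagation, so repositories automounted on
# the host after the container starts also appear inside it.
task "app" {
  driver = "docker"

  config {
    image = "ubuntu:22.04"

    mount {
      type   = "bind"
      source = "/cvmfs"
      target = "/cvmfs"

      bind_options {
        propagation = "rslave"
      }
    }
  }
}
```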
But when I use the cvmfs-csi volume with mount propagation set to `rslave`, it is still not automatically mounted in the application container.