flexvolume driver for NFS in Azure AKS
jmorcar opened this issue · 4 comments
I'm trying to build a Docker image with a FlexVolume driver for NFS, as indicated here:
https://github.com/kubernetes/examples/tree/master/staging/volumes/flexvolume
I see the repo has an example NFS driver here: https://github.com/kubernetes/examples/blob/master/staging/volumes/flexvolume/nfs
But when I analyze the driver's functions, I still don't understand where the `domount()` function gets the JSON that it parses in `NFS_SERVER=$(echo $2 | jq -r '.server')`:
```sh
domount() {
	MNTPATH="$1"

	# $2 is the JSON options blob passed to the driver;
	# jq extracts the individual fields.
	NFS_SERVER=$(echo "$2" | jq -r '.server')
	SHARE=$(echo "$2" | jq -r '.share')

	if [ "$(ismounted)" -eq 1 ] ; then
		log '{"status": "Success"}'
		exit 0
	fi

	mkdir -p "${MNTPATH}" &> /dev/null

	mount -t nfs "${NFS_SERVER}:/${SHARE}" "${MNTPATH}" &> /dev/null
	if [ $? -ne 0 ]; then
		err "{ \"status\": \"Failure\", \"message\": \"Failed to mount ${NFS_SERVER}:${SHARE} at ${MNTPATH}\"}"
		exit 1
	fi

	log '{"status": "Success"}'
	exit 0
}
```
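For context on where that JSON comes from: kubelet invokes a FlexVolume driver as `<driver> mount <mount-dir> <json-options>`, so `$2` is the volume's `options` map serialized as a JSON string. A minimal sketch of the extraction (the server and share values below are placeholders, not real endpoints):

```shell
# Simulate the JSON options string kubelet would pass as $2.
# "10.0.0.4" and "export" are made-up placeholder values.
OPTIONS='{"server": "10.0.0.4", "share": "export"}'

# Same jq extraction the driver's domount() performs.
NFS_SERVER=$(echo "$OPTIONS" | jq -r '.server')
SHARE=$(echo "$OPTIONS" | jq -r '.share')

# domount() would then mount ${NFS_SERVER}:/${SHARE} at the mount path.
echo "${NFS_SERVER}:/${SHARE}"
```

So nothing is read from environment variables; the server and share arrive entirely through that second argument.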
What environment variable or source JSON do I have to configure to point this at an Azure NFS file-storage account, for example? Or is this NFS example only valid with a standard NFS server?
Here is an example driver for Azure Blobfuse containers that works fine:
https://github.com/Azure/kubernetes-volume-drivers/tree/master/flexvolume/blobfuse
So how do I create a FlexVolume NFS driver for the Azure NFS v4.1 preview service? (I have already registered this feature in my subscription.)
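To clarify where the JSON in `$2` originates: it is the `options` map of the `flexVolume` stanza in the pod (or PersistentVolume) spec. A hedged sketch, assuming the driver is installed under the `k8s/nfs` vendor~driver name as in the linked example (the pod name, server address, and share name are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-nfs-flexvol        # hypothetical pod name
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: nfs-vol
      mountPath: /data
  volumes:
  - name: nfs-vol
    flexVolume:
      driver: "k8s/nfs"          # vendor~driver name, as in the example repo
      fsType: "nfs"
      options:
        server: "10.0.0.4"       # placeholder: your NFS endpoint
        share: "export"          # placeholder: your export/share name
```

Kubelet serializes that `options` map to JSON and hands it to the driver as the second argument, which is exactly what `jq -r '.server'` reads. Note also that the example driver runs a plain `mount -t nfs`; for a service that requires NFS v4.1 you would presumably need to adjust the mount options in the driver itself.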
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-contributor-experience at kubernetes/community.
/close
@fejta-bot: Closing this issue.
In response to this:
> Rotten issues close after 30d of inactivity.
> Reopen the issue with /reopen.
> Mark the issue as fresh with /remove-lifecycle rotten.
> Send feedback to sig-contributor-experience at kubernetes/community.
> /close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.