digitalocean/csi-digitalocean

Support Docker Swarm

drozzy opened this issue · 17 comments

Has anyone used this with docker swarm mode?
Will it work and if yes, could someone provide an example in a stack file?

Thank you very much.

fatih commented

Unfortunately, there are no Docker Swarm mode examples. Currently, our main focus is to support Kubernetes. I'll update the README.md in the future for other COs. Thanks!

A Container Storage Interface (CSI) Driver for DigitalOcean Block Storage. The CSI plugin allows you to use DigitalOcean Block Storage with your preferred Container Orchestrator.

Maybe replace with: "with Kubernetes".

fatih commented

Sure, but this doesn't mean that it's not meant to be used with others. If any other CO supports the CSI spec, it is still able to use this plugin. What I mean is that our focus is currently on supporting Kubernetes; it doesn't mean CSI is only meant for Kubernetes. Hope this clarifies it better :) Thanks again for the feedback.

I just wish you supported Docker Swarm Mode (it is way better than old swarm, and much easier to learn than K8s).
Thanks for the explanation.

FYI: Docker 23.0.0 added CSI support for Swarm mode, so unless this plugin relies on Kubernetes-specific implementation details (works as a controller, etc.), it should be possible to make it work with Docker Swarm now.

I have built some example scripts and guidance at https://github.com/olljanat/csi-plugins-for-docker-swarm which you might find useful.
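For reference, consuming a CSI plugin as a Swarm cluster volume looks roughly like this once a packaged plugin exists. This is only a sketch: the plugin alias do-csi and the image name are placeholders, the exact flags should be checked against docker volume create --help on 23.0+, and I'm not sure stack files can declare cluster volumes yet, so it uses the plain CLI:

# Install a CSI plugin that has been packaged for Swarm (hypothetical alias/image).
docker plugin install --grant-all-permissions --alias do-csi example/do-csi-swarm:latest

# Create a cluster-scoped volume backed by the driver. --sharing none and
# --scope single match DO block storage, which attaches to one droplet at a time.
docker volume create \
  --driver do-csi \
  --type mount \
  --sharing none \
  --scope single \
  --required-bytes 10GB \
  my-data

# Services consume cluster volumes via mount type "cluster".
docker service create --name web \
  --mount type=cluster,src=my-data,dst=/var/lib/data \
  nginx:alpine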

Hi @olljanat,

thanks for the update. The DigitalOcean CSI driver should not come with any Kubernetes-specific implementations (unless we accidentally introduced some, which should be considered a bug). We do use some library functionality from the Kubernetes ecosystem, but it should not affect compatibility (e.g., mount helpers).

FWIW, we had a Nomad user trying out the DigitalOcean driver some time ago, which led to an upstream Nomad ticket being filed to get some CSI spec non-compliance fixed on their end. I haven't heard back from them since, which makes me assume our driver should generally be usable with container orchestrators other than Kubernetes.

Our main usage is definitely Kubernetes as described earlier in this issue; if anyone spots a problem possibly related to the DigitalOcean driver not behaving in accordance with the spec, however, please do file a new issue on our repo. (End user support pertaining to the configuration or usage of the driver in other orchestrators is definitely out of scope though and should be directed towards the respective orchestrator's community and channels.)

s4ke commented

For anyone considering picking this up: even though I don't have any DigitalOcean servers, I think porting the CSI driver to Swarm should be doable. For the Hetzner CSI this meant that we only had to bundle the controller and node binary into a single all-in-one binary and then create the tooling to build a Docker plugin.

You can find the PR over at hetznercloud/csi-driver#376
and a preliminary discussion over at hetznercloud/csi-driver#374
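Roughly, the packaging follows the standard managed-plugin flow: unpack the existing driver image into a rootfs, add a config.json that declares the CSI interface types, and run docker plugin create. The sketch below is untested and full of assumptions; the image tag, plugin name, binary path, entrypoint flags, propagated mount path, and how the API token gets wired into the driver all need to be verified against this repo:

# Unpack the published driver image into a plugin rootfs (image/tag are placeholders).
mkdir -p plugin/rootfs
docker create --name tmp digitalocean/do-csi-plugin:vX.Y.Z
docker export tmp | tar -x -C plugin/rootfs
docker rm tmp

# Minimal config.json; Swarm's cluster-volume support discovers CSI plugins via
# the docker.csicontroller/1.0 and docker.csinode/1.0 interface types.
cat > plugin/config.json <<'EOF'
{
  "description": "DigitalOcean CSI driver packaged for Docker Swarm (sketch)",
  "entrypoint": ["/bin/do-csi-plugin", "--endpoint=unix:///run/docker/plugins/csi.sock"],
  "interface": {
    "types": ["docker.csicontroller/1.0", "docker.csinode/1.0"],
    "socket": "csi.sock"
  },
  "network": {"type": "host"},
  "linux": {
    "capabilities": ["CAP_SYS_ADMIN"],
    "allowAllDevices": true
  },
  "mounts": [
    {"source": "/dev", "destination": "/dev", "type": "bind", "options": ["rbind"]}
  ],
  "propagatedMount": "/data/published",
  "env": [
    {"name": "DIGITALOCEAN_ACCESS_TOKEN", "settable": ["value"], "value": ""}
  ]
}
EOF

docker plugin create example/do-csi-swarm plugin/
# docker plugin push example/do-csi-swarm

One open question is how the settable DIGITALOCEAN_ACCESS_TOKEN env var reaches the driver, which, as far as I can tell, expects the API token via its --token flag; that may need a tiny wrapper script as the entrypoint. olljanat's repo linked above is probably the best reference for the real field values.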

s4ke commented

@timoreimann Docker Swarm plugins have to be packaged specifically for use with Swarm, so it would be great for DigitalOcean users who want to try this if they could use an official source. Would you accept PRs that add the necessary tooling to build such a plugin if anyone is interested in working on CSI support for Swarm?

Hey @s4ke, thanks for the offer. Two quick questions:

  1. Can you (or anyone else active in this issue right now) confirm that you currently have the need to run the DO CSI driver in Docker Swarm? (Basically trying to confirm here that we have a real use case.)
  2. Can you roughly outline the work required to support Docker Swarm with the DO CSI driver? (For what it's worth, the Controller and Node service portions already live in the same binary, though it might not be possible to run them at the same time right now.)
s4ke commented

@timoreimann to be clear: we don't use DigitalOcean right now, so I can't do the work myself. I can assist with packaging questions, as can @olljanat.

AFAICS, the last comment in this discussion expressing direct interest in seeing Docker Swarm support is from more than 4 years ago. I'd prefer to see a real-world need expressed before committing to any work. Once that's the case, though, I'm generally open to external contributions.

I would be interested as well. Previously I wasn't aware that there was any chance of CSI compatibility with Swarm, so I developed my own minimalistic (non-CSI) plugin in the meantime. But it would be very helpful to have a robust and centrally supported CSI alternative. Also, I would be willing to test.

Thanks @djmaze.

The next step should be to outline the work that needs to be done. This will allow us to ensure it fits with the existing CSI driver structure. Afterwards, any volunteer(s) may drive the work.

As mentioned in olljanat/csi-plugins-for-docker-swarm#14 (comment), I would like to see a GitHub Actions template which can be copied to any CSI project with small modifications.

It looks like e2e tests for Kubernetes already exist in:

- name: Run end-to-end tests
  env:
    DIGITALOCEAN_ACCESS_TOKEN: ${{ secrets.CSIDigitalOceanAccessToken }}
    # ${{ github.head_ref }} will be empty for pull requests and non-empty
    # for pushes. Handle the cases in "run" below to parameterize our e2e
    # test invocation accordingly.
    BRANCH: ${{ github.head_ref }}
    # Use less ginkgo nodes to avoid running into the DO API rate limit.
    # The upstream end-to-end tests are quite demanding in terms of API
    # request volume, and we run them concurrently.
    NUM_GINKGO_NODES: "8"
  run: |
    BRANCH=$(echo -n ${BRANCH} | tr -c '[:alnum:]._-' '-')
    NAME_SUFFIX="${BRANCH}"
    TAG="${BRANCH}"
    if [[ $BRANCH != "" ]]; then
      # Hash name suffix which goes into the cluster name to ensure we do
      # not exceed any name constraints.
      NAME_SUFFIX=$(echo -n ${BRANCH} | sha256sum | cut -c1-7)
      RUNNER_IMAGE_TAG_PREFIX=${BRANCH}-
    else
      NAME_SUFFIX=master
      TAG=latest
    fi
    TIMEOUT=60m make test-e2e E2E_ARGS="-ginkgo-nodes ${NUM_GINKGO_NODES} -driver-image ${DOCKER_ORG}/do-csi-plugin-dev:${TAG} -runner-image ${DOCKER_ORG}/k8s-e2e-test-runner:${RUNNER_IMAGE_TAG_PREFIX}latest -name-suffix ${NAME_SUFFIX} -retain ${{ matrix.kube-release }}"
and there already seems to be documentation about how to run those in a dev environment (https://github.com/digitalocean/csi-digitalocean#end-to-end-tests), so I will at least start by reading that.

@djmaze I'm also interested in whether your non-CSI plugin is available somewhere? It might be helpful too.

To make sure we're on the same page: bundling and possibly refactoring the driver for compatibility with other container orchestrators is something I'd think should come with relatively low effort (in terms of initial and repeated maintenance work). However, we likely cannot maintain end-to-end tests for orchestrators other than Kubernetes in this repository given DigitalOcean's focus is on Kubernetes. My concern is around scenarios where complications with Docker could block progress / releasing for Kubernetes users when we may have no one available to really understand the specifics of other orchestrators.

I can theoretically see a possibility where an external contributor (such as one of you involved in this discussion) could function as a Docker maintainer. Practically speaking, the Kubernetes use case would still take priority, with implications that aren't very appealing when the Docker Swarm part fails in some regard (e.g., leaving Docker tests broken, not releasing for Docker, etc.).

Chances are you imagine the amount of involvement differently, or in a way that addresses my concerns. Let me know if that's the case.

To make sure we're on the same page: bundling and possibly refactoring the driver for compatibility with other container orchestrators is something I'd think should come with relatively low effort (in terms of initial and repeated maintenance work).

Yes, and that should be the case as long as both this CSI driver and Docker do things right on their end with regard to CSI spec compatibility.

My concern is around scenarios where complications with Docker could block progress / releasing for Kubernetes users when we may have no one available to really understand the specifics of other orchestrators.

IMO the right way to handle those cases is to skip releasing for Docker until the issue is fixed and just make sure that existing versions keep working (=> the APIs they need cannot change, etc.). So I would like to see a test set that is good enough to tell whether a version is good to be released for Docker, but it probably does not need to be an end-to-end test, more like an integration test.
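As a very rough sketch (plugin reference, token plumbing, and timing are all placeholders, and it would have to run on a droplet-based self-hosted runner because block storage volumes only attach to droplets), such a release gate could be a single workflow step alongside the existing e2e job:

- name: Swarm cluster-volume smoke test (sketch)
  run: |
    docker swarm init
    # Placeholder plugin reference; would come from a packaging step earlier in the job.
    docker plugin install --disable --grant-all-permissions --alias do-csi example/do-csi-swarm:latest
    docker plugin set do-csi DIGITALOCEAN_ACCESS_TOKEN=${{ secrets.CSIDigitalOceanAccessToken }}
    docker plugin enable do-csi
    # Create a cluster volume and make sure a service can actually write to it.
    docker volume create --driver do-csi --type mount --sharing none --scope single smoke-vol
    docker service create --name smoke --detach \
      --mount type=cluster,src=smoke-vol,dst=/data \
      alpine sh -c 'echo ok > /data/ok && sleep 300'
    sleep 90
    docker service ps --no-trunc smoke
    docker service rm smoke
    docker volume rm smoke-vol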

I can theoretically see a possibility where an external contributor (such as one of you involved in this discussion) could function as a Docker maintainer.

Something like that would most likely work. They don't even need to be official maintainers, more like people who can be pinged when there are questions about these topics.

Chances are you imagine the amount of involvement differently, or in a way that addresses my concerns. Let me know if that's the case.

So far I have seen very few CSI-driver-specific needs when I tested packaging drivers for Docker Swarm, so it should not be too hard to maintain some generic toolkit, etc. that could also be used here. And if we manage to do so, then the right place to maintain it would be some Git repo in the moby organization.

PS. A draft version of the Docker Swarm packaging scripts for this CSI driver now exists in olljanat/csi-plugins-for-docker-swarm#15

@olljanat

@djmaze I'm also interested in whether your non-CSI plugin is available somewhere? It might be helpful too.

https://github.com/djmaze/dobs-volume-plugin