Slow operations and cannot remove volume
GitHK opened this issue · 2 comments
- Plugin version (or commit ref): rclone:latest (ID from `docker inspect`: 4ed13fa184fb0b79afa2a5d2f301764a1705ee7e8bb108c473726abfe4185085, PluginReference: docker.io/rclone/docker-volume-rclone:amd64)
- Docker version: 20.10.7, build f0df350
- Plugin type: legacy/managed
- Operating system: Ubuntu 20.04
Description
I create a service that spawns some volumes managed via rclone, using Ceph as an S3 backend.
When creating a volume, the options look something like this:
```json
"VolumeOptions": {
    "DriverConfig": {
        "Name": "rclone",
        "Options": {
            "allow-other": "true",
            "dir-cache-time": "10s",
            "path": "master-simcore/e5751e46-8f09-11ec-a814-02420a041bec/50b5e822-3c89-4809-aebe-03302ed656a6/home_jovyan_work_workspace",
            "poll-interval": "9s",
            "s3-access_key_id": "****************",
            "s3-endpoint": "https://ceph_endpoint_address",
            "s3-location_constraint": "",
            "s3-provider": "Minio",
            "s3-region": "us-east-1",
            "s3-secret_access_key": "****************",
            "s3-server_side_encryption": "",
            "type": "s3",
            "vfs-cache-mode": "minimal"
        }
    }
}
```
I've just noticed that the `s3-provider` is set to `Minio` and not `Ceph`. Maybe this is what is causing all the issues.
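For context, a volume with equivalent options can also be created directly from the CLI via the plugin's `-o` option mapping. This is only a sketch: the volume name, bucket path, endpoint, and keys below are placeholders, not values from this issue.

```shell
# Sketch: create an rclone-backed S3 volume with Ceph as the provider.
# All names and credentials here are placeholders.
docker volume create my_workspace \
  -d rclone \
  -o type=s3 \
  -o path=my-bucket/some/sub/dir \
  -o s3-provider=Ceph \
  -o s3-endpoint=https://ceph_endpoint_address \
  -o s3-access_key_id=PLACEHOLDER_KEY \
  -o s3-secret_access_key=PLACEHOLDER_SECRET \
  -o allow-other=true \
  -o vfs-cache-mode=minimal
```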
Each of the operations below takes a very long time:
```
$ docker volume ls
DRIVER          VOLUME NAME
rclone:latest   dy-sidecar_1cd462bc-b958-467f-ac61-09d2faad6b07_home_jovyan_work_workspace
....

$ docker volume inspect dy-sidecar_1cd462bc-b958-467f-ac61-09d2faad6b07_home_jovyan_work_workspace
[]
Error response from daemon: get dy-sidecar_1cd462bc-b958-467f-ac61-09d2faad6b07_home_jovyan_work_workspace: error while checking if volume "dy-sidecar_1cd462bc-b958-467f-ac61-09d2faad6b07_home_jovyan_work_workspace" exists in driver "rclone:latest": Post "http://%2Frun%2Fdocker%2Fplugins%2F4ed13fa184fb0b79afa2a5d2f301764a1705ee7e8bb108c473726abfe4185085%2Frclone.sock/VolumeDriver.Get": context deadline exceeded

$ docker volume rm -f dy-sidecar_1cd462bc-b958-467f-ac61-09d2faad6b07_home_jovyan_work_workspace
Error response from daemon: get dy-sidecar_1cd462bc-b958-467f-ac61-09d2faad6b07_home_jovyan_work_workspace: error while checking if volume "dy-sidecar_1cd462bc-b958-467f-ac61-09d2faad6b07_home_jovyan_work_workspace" exists in driver "rclone:latest": Post "http://%2Frun%2Fdocker%2Fplugins%2F4ed13fa184fb0b79afa2a5d2f301764a1705ee7e8bb108c473726abfe4185085%2Frclone.sock/VolumeDriver.Get": context deadline exceeded
```
How would I remove a volume in such a situation?
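One workaround that may help when a plugin socket stops answering (a sketch, not confirmed against this particular setup): force-disable and re-enable the plugin so Docker re-establishes the socket, then retry the removal.

```shell
# Force-disable the unresponsive plugin, then re-enable it so Docker
# gets a fresh plugin socket.
docker plugin disable --force rclone:latest
docker plugin enable rclone:latest

# Retry the removal once the plugin answers again.
docker volume rm dy-sidecar_1cd462bc-b958-467f-ac61-09d2faad6b07_home_jovyan_work_workspace
```

Note that `--force` kills any mounts the plugin is still serving, so this should only be done once the services using the volumes are gone.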
I can provide more context now
I have changed the s3-provider and the issue is still there.
I can reproduce this reliably: I have 10 volumes attached to 10 different services. When I remove the 10 Docker Swarm services and also delete the networks, I end up in the situation above. I try to remove the volumes as soon as the services are removed.
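The reproduction described above can be sketched roughly like this (the service and volume names are hypothetical, and the image/mount details are placeholders standing in for the real services):

```shell
# Create 10 services, each mounting its own rclone-backed volume.
for i in $(seq 1 10); do
  docker service create --name "svc_$i" \
    --mount type=volume,source="vol_$i",volume-driver=rclone,target=/data \
    alpine sleep infinity
done

# Remove the services and immediately try to remove the volumes:
# this is where 'docker volume rm' starts timing out.
for i in $(seq 1 10); do
  docker service rm "svc_$i"
  docker volume rm "vol_$i"
done
```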
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.