Uncertainty regarding caching for multiple Docker images in the same pipeline
Given the following pipeline, I am having trouble getting the second publish step to use the cache. If I use the cache_key option, I get the message 'No cache for /drone/docker' no matter how many times I run it. Am I missing something obvious?
pipeline:
  restore-cache-base:
    image: drillster/drone-volume-cache
    restore: true
    environment:
      STEP_TAG: 'BASE'
    cache_key: [ DRONE_REPO_OWNER, DRONE_REPO_NAME, DRONE_BRANCH, STEP_TAG ]
    mount:
      - /drone/docker
    volumes:
      - /tmp/cache:/cache

  publish-base:
    image: plugins/docker
    registry: xxxx
    repo: yyyy
    dockerfile: Dockerfile.base
    tags:
      - latest
      - ${DRONE_COMMIT_BRANCH}
      - ${DRONE_COMMIT_BRANCH}-${DRONE_COMMIT_SHA:0:8}
    secrets:
      - docker_username
      - docker_password
    storage_path: /drone/docker
    purge: false

  rebuild-cache-base:
    image: drillster/drone-volume-cache
    rebuild: true
    environment:
      STEP_TAG: 'BASE'
    cache_key: [ DRONE_REPO_OWNER, DRONE_REPO_NAME, DRONE_BRANCH, STEP_TAG ]
    mount:
      - /drone/docker
    volumes:
      - /tmp/cache:/cache

  restore-cache-zerg:
    image: drillster/drone-volume-cache
    restore: true
    environment:
      STEP_TAG: 'ZERG'
    cache_key: [ DRONE_REPO_OWNER, DRONE_REPO_NAME, DRONE_BRANCH, STEP_TAG ]
    mount:
      - /drone/docker
    volumes:
      - /tmp/cache:/cache

  publish-zerg:
    image: plugins/docker
    registry: xxxx
    repo: zzzz
    dockerfile: Dockerfile.zerg
    tags:
      - latest
      - ${DRONE_COMMIT_BRANCH}
      - ${DRONE_COMMIT_BRANCH}-${DRONE_COMMIT_SHA:0:8}
    secrets:
      - docker_username
      - docker_password
    storage_path: /drone/docker
    purge: false

  rebuild-cache-zerg:
    image: drillster/drone-volume-cache
    rebuild: true
    environment:
      STEP_TAG: 'ZERG'
    cache_key: [ DRONE_REPO_OWNER, DRONE_REPO_NAME, DRONE_BRANCH, STEP_TAG ]
    mount:
      - /drone/docker
    volumes:
      - /tmp/cache:/cache
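In case it is useful for debugging, a throw-away inspection step along these lines could be dropped into the pipeline to see what the cache plugin actually writes on the host. This is only a rough sketch: the step name and the alpine image are placeholders, the repository has to be marked as trusted for the extra host volume mount to be allowed, and the assumption is that the directory layout under /tmp/cache reflects the cache_key values, so a missing STEP_TAG segment would show up here:

  inspect-cache:
    image: alpine
    volumes:
      - /tmp/cache:/cache
    commands:
      # list whatever the cache plugin has written on the host so the
      # effective cache_key path can be checked by eye
      - ls -laR /cache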
Does it work if you merge the cache steps, like so?
pipeline:
  restore-cache:
    image: drillster/drone-volume-cache
    restore: true
    cache_key: [ DRONE_REPO_OWNER, DRONE_REPO_NAME, DRONE_BRANCH ]
    mount:
      - /drone/docker
    volumes:
      - /tmp/cache:/cache

  publish-base:
    image: plugins/docker
    registry: xxxx
    repo: yyyy
    dockerfile: Dockerfile.base
    tags:
      - latest
      - ${DRONE_COMMIT_BRANCH}
      - ${DRONE_COMMIT_BRANCH}-${DRONE_COMMIT_SHA:0:8}
    secrets:
      - docker_username
      - docker_password
    storage_path: /drone/docker
    purge: false

  publish-zerg:
    image: plugins/docker
    registry: xxxx
    repo: zzzz
    dockerfile: Dockerfile.zerg
    tags:
      - latest
      - ${DRONE_COMMIT_BRANCH}
      - ${DRONE_COMMIT_BRANCH}-${DRONE_COMMIT_SHA:0:8}
    secrets:
      - docker_username
      - docker_password
    storage_path: /drone/docker
    purge: false

  rebuild-cache:
    image: drillster/drone-volume-cache
    rebuild: true
    cache_key: [ DRONE_REPO_OWNER, DRONE_REPO_NAME, DRONE_BRANCH ]
    mount:
      - /drone/docker
    volumes:
      - /tmp/cache:/cache
This is what happens when each image has its own restore-cache and rebuild-cache steps, after a build has already completed once:

clone              00:02
restore-a-cache    00:03
publish-a          00:06
rebuild-a-cache    00:01
restore-b-cache    00:01
publish-b          00:38   <-- bad step! why isn't it cached?
rebuild-b-cache    00:01
This is what happens when the cache steps are merged into a single restore/rebuild pair, as you suggested above, again after a build has already completed once:

clone            00:01
restore-cache    00:03
publish-a        00:04
publish-b        00:36   <-- bad step! why isn't it cached?
rebuild-cache    00:01
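For what it's worth, a quick way to check whether the restored layer store actually survives into publish-b would be something like the step below, inserted between the two publish steps. This is only illustrative: the step name is made up, the alpine image is a placeholder, and it assumes /drone is the shared workspace volume here, so whatever publish-a and the restore step wrote to /drone/docker should still be visible:

  check-layer-store:
    image: alpine
    commands:
      # show how much layer data is present before publish-b starts;
      # '|| true' keeps the step from failing if the path is missing
      - du -sh /drone/docker || true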
Separate cache steps, .drone.yml file:
pipeline:
  restore-a-cache:
    image: drillster/drone-volume-cache
    restore: true
    cache_key: [ DRONE_REPO_OWNER, DRONE_REPO_NAME, DRONE_BRANCH ]
    mount:
      - /drone/docker
    volumes:
      - /tmp/cache:/cache
    when:
      status: success
      event: push
      branch: master
      local: false

  publish-a:
    image: plugins/docker
    repo: test-a
    registry: harbor
    dockerfile: docker/Dockerfile.a
    force_tag: true
    tags:
      - latest
      - 2018.10.1-${DRONE_BUILD_NUMBER}-${DRONE_COMMIT_SHA:0:8}
    secrets:
      - docker_username
      - docker_password
    storage_path: /drone/docker
    purge: false
    when:
      status: success
      event: push
      branch: master
      local: false

  rebuild-a-cache:
    image: drillster/drone-volume-cache
    rebuild: true
    cache_key: [ DRONE_REPO_OWNER, DRONE_REPO_NAME, DRONE_BRANCH ]
    mount:
      - /drone/docker
    volumes:
      - /tmp/cache:/cache
    when:
      status: success
      event: push
      branch: master
      local: false

  restore-b-cache:
    image: drillster/drone-volume-cache
    restore: true
    cache_key: [ DRONE_REPO_OWNER, DRONE_REPO_NAME, DRONE_BRANCH ]
    mount:
      - /drone/docker
    volumes:
      - /tmp/cache:/cache
    when:
      status: success
      event: push
      branch: master
      local: false

  publish-b:
    image: plugins/docker
    repo: test-b
    registry: harbor
    dockerfile: docker/Dockerfile.b
    force_tag: true
    tags:
      - latest
      - 2018.10.1-${DRONE_BUILD_NUMBER}-${DRONE_COMMIT_SHA:0:8}
    when:
      status: success
      event: push
      branch: master
      local: false
    secrets:
      - docker_username
      - docker_password

  rebuild-b-cache:
    image: drillster/drone-volume-cache
    rebuild: true
    cache_key: [ DRONE_REPO_OWNER, DRONE_REPO_NAME, DRONE_BRANCH ]
    mount:
      - /drone/docker
    volumes:
      - /tmp/cache:/cache
    when:
      status: success
      event: push
      branch: master
      local: false
Merged cache steps, .drone.yml file:
pipeline:
  restore-cache:
    image: drillster/drone-volume-cache
    restore: true
    cache_key: [ DRONE_REPO_OWNER, DRONE_REPO_NAME, DRONE_BRANCH ]
    mount:
      - /drone/docker
    volumes:
      - /tmp/cache:/cache
    when:
      status: success
      event: push
      branch: master
      local: false

  publish-a:
    image: plugins/docker
    repo: test-b
    registry: harbor
    dockerfile: docker/Dockerfile.a
    force_tag: true
    tags:
      - latest
      - 2018.10.0-${DRONE_BUILD_NUMBER}-${DRONE_COMMIT_SHA:0:8}
    secrets:
      - docker_username
      - docker_password
    storage_path: /drone/docker
    purge: false
    when:
      status: success
      event: push
      branch: master
      local: false

  publish-b:
    image: plugins/docker
    repo: test-b
    registry: harbor
    dockerfile: docker/Dockerfile.b
    force_tag: true
    tags:
      - latest
      - 2018.10.0-${DRONE_BUILD_NUMBER}-${DRONE_COMMIT_SHA:0:8}
    when:
      status: success
      event: push
      branch: master
      local: false
    secrets:
      - docker_username
      - docker_password

  rebuild-cache:
    image: drillster/drone-volume-cache
    rebuild: true
    cache_key: [ DRONE_REPO_OWNER, DRONE_REPO_NAME, DRONE_BRANCH ]
    mount:
      - /drone/docker
    volumes:
      - /tmp/cache:/cache
    when:
      status: success
      event: push
      branch: master
      local: false
Let me know if you need any more information!
Interesting. Logically there is no reason why this shouldn't work... This also leads me to believe this plugin is not at fault here; it may be the Docker plugin or even Docker itself. I know there's also an issue with multi-stage Dockerfiles and caching. I'll have to invest some time to figure out what's going on and why it's not working.
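One experiment that might help narrow it down (just a sketch based on your merged config; the paths are illustrative and the rest of each step is abbreviated) would be to give each image its own storage path and cache mount, so the two builds never share a layer store. If publish-b still rebuilds everything from scratch with a warm cache of its own, that would point even further away from this plugin:

pipeline:
  restore-cache:
    image: drillster/drone-volume-cache
    restore: true
    cache_key: [ DRONE_REPO_OWNER, DRONE_REPO_NAME, DRONE_BRANCH ]
    mount:
      # one layer store per image, so the two docker builds cannot
      # touch each other's data
      - /drone/docker-a
      - /drone/docker-b
    volumes:
      - /tmp/cache:/cache

  publish-a:
    image: plugins/docker
    registry: harbor
    repo: test-a
    dockerfile: docker/Dockerfile.a
    secrets:
      - docker_username
      - docker_password
    storage_path: /drone/docker-a
    purge: false

  publish-b:
    image: plugins/docker
    registry: harbor
    repo: test-b
    dockerfile: docker/Dockerfile.b
    secrets:
      - docker_username
      - docker_password
    storage_path: /drone/docker-b
    purge: false

  rebuild-cache:
    image: drillster/drone-volume-cache
    rebuild: true
    cache_key: [ DRONE_REPO_OWNER, DRONE_REPO_NAME, DRONE_BRANCH ]
    mount:
      - /drone/docker-a
      - /drone/docker-b
    volumes:
      - /tmp/cache:/cache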