Tooling to manage image caches a bit better
Jc2k opened this issue · 1 comments
If your Dockerfile produces a large image and you are iterating and running your integration tests often, you might find a lot of stale images in `docker image ls`. But all your developer images are untagged, so if you prune, you lose the cached layers that weren't actually stale!
My first attempt at resolving this was to tag the image with a name:
```python
app_image = build(
    path=os.path.join(os.path.dirname(__file__), '..'),
    tag='localhost/pytest/project:dev',
)
```
This worked, sort of: `docker image prune` did not prune `localhost/pytest/project:dev`. But it was a multi-stage image, and the stages the default target depends on did get pruned. This meant a 1 GB image full of dev tools had to be rebuilt.
The second attempt was to build each target manually, using dummy build args to create an appropriate dependency graph:
```python
app_builder_image = build(
    path=os.path.join(os.path.dirname(__file__), '..'),
    target='builder',
    tag='localhost/pytest/project:builder',
)

app_image = build(
    path=os.path.join(os.path.dirname(__file__), '..'),
    tag='localhost/pytest/project:dev',
    # This is just a hack to set up the dep graph
    buildargs={
        'dummy1': '{app_builder_image.id}',
    },
)
```
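The `'{app_builder_image.id}'` string relies on pytest-docker-tools interpolating fixture names into string parameters, which is what forces the builder image to be built first. A minimal sketch of that kind of resolution, assuming a simple `str.format`-style substitution (the library's internals may differ; `resolve` and `Image` are illustrative names, not the real API):

```python
import string

def resolve(template, fixtures):
    # Find fixture names referenced by the template, look each one up,
    # then format the string against them. In pytest, "looking up" a
    # fixture triggers its creation, which is what produces the
    # dependency ordering the buildargs hack exploits.
    names = {field.split('.')[0]
             for _, field, _, _ in string.Formatter().parse(template)
             if field}
    resolved = {name: fixtures[name] for name in names}
    return template.format(**resolved)

class Image:
    """Stand-in for a built image fixture."""
    def __init__(self, id):
        self.id = id

print(resolve('{app_builder_image.id}',
              {'app_builder_image': Image('sha256:abc')}))
# -> sha256:abc
```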
So pytest-docker-tools would build the `builder` target of my multi-stage Dockerfile, tag it as `localhost/pytest/project:builder`, then build the default target and tag it as `localhost/pytest/project:dev` (the `buildargs` entry tricks it into getting the dependency ordering right). This does work: `docker image prune` now leaves the latest image (and any builder images) alone.
But it's also kind of gross.
At a minimum, maybe a "proper" `depends_on` kwarg would be better.
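To make the idea concrete, here is a self-contained sketch of how a `depends_on` kwarg could enforce build ordering without dummy buildargs. Note that `depends_on` is hypothetical (pytest-docker-tools has no such parameter today), and `build`/`ImageFixture` here are stand-ins, not the real fixture factory:

```python
built = []  # records the order in which images get built

class ImageFixture:
    """Stand-in for a lazily-built image fixture."""
    def __init__(self, tag, depends_on=()):
        self.tag = tag
        self.depends_on = depends_on
        self._image = None

    def resolve(self):
        # Building an image first resolves everything it depends on --
        # exactly the ordering the buildargs hack fakes today.
        if self._image is None:
            for dep in self.depends_on:
                dep.resolve()
            built.append(self.tag)
            self._image = self.tag  # real code would run docker build here
        return self._image

def build(tag, depends_on=()):
    return ImageFixture(tag, depends_on)

builder = build('localhost/pytest/project:builder')
app = build('localhost/pytest/project:dev', depends_on=[builder])
app.resolve()
# built order: builder first, then dev
```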
Or, can we make a multi-stage-aware variant of `build`? Could this be as simple as:
```python
multi_stage_build(
    ...,
    tags={
        None: 'localhost/pytest/project:dev',
        'builder': 'localhost/pytest/project:builder',
    },
)
```
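One possible semantics for such a helper: build and tag each named stage first (so its layers survive `docker image prune`), then build the default target, with `None` meaning the Dockerfile's final stage. A sketch under those assumptions, where `do_build` is a stand-in for the actual docker build call, not the real API:

```python
def multi_stage_build(path, tags, do_build):
    # tags maps target name -> image tag; None keys the default target.
    images = {}
    # Named intermediate stages first, so their layers get tagged...
    for target, tag in tags.items():
        if target is not None:
            images[target] = do_build(path=path, target=target, tag=tag)
    # ...then the default (final) target last.
    if None in tags:
        images[None] = do_build(path=path, target=None, tag=tags[None])
    return images

# Usage with a fake build function that just records the call order:
calls = []
def fake_build(path, target, tag):
    calls.append((target, tag))
    return tag

multi_stage_build(
    '.',
    {
        None: 'localhost/pytest/project:dev',
        'builder': 'localhost/pytest/project:builder',
    },
    fake_build,
)
# calls: builder stage first, default target last
```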