bug: images cannot be forcefully removed if the dataset has been unintentionally destroyed by hand
Steps to reproduce
Prerequisites
- No existing images or containers (optional, but recommended for easier identification of the pulled image in the following steps)
- ZFS dataset `zroot/containers` is mounted on `/var/db/containers` (a setup sketch follows this list)
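For context, a minimal setup sketch, assuming the pool `zroot` already exists and that containers-storage is configured (e.g. in `/etc/containers/storage.conf`) to use the zfs driver with its storage under `/var/db/containers`:
# Assumed layout from this report: pool "zroot", storage under /var/db/containers
$ zfs create -o mountpoint=/var/db/containers zroot/containers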
Steps
# Replace <image> with any image, such as docker.io/dougrabson/hello
$ podman pull <image>
# Check the dataset name created under `zroot/containers`
$ zfs list -d 1 zroot/containers
# Manually destroy the dataset
$ zfs destroy zroot/containers/<dataset_name>
# An error may occur here; this might be expected
$ podman images <image>
# Manually remove the corrupted image
$ podman rmi <image> --force
# The error reproduces here; is this an expected consequence?
$ podman images <image>
Note: For recovery, you can recreate the dataset with the same name and then remove the image with Podman.
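A minimal sketch of that recovery, assuming `<dataset_name>` is the name that `zfs list -d 1 zroot/containers` showed before the destroy (i.e. the name Podman still has in its layer metadata):
# Recreate an empty dataset with the name the storage driver still expects
$ zfs create zroot/containers/<dataset_name>
# The forced removal should now succeed
$ podman rmi <image> --force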
Expected behavior
I’m unsure if this is the intended behavior, but I expected that the image could be forcefully removed, even after the dataset was destroyed.
I have noticed this behaviour in the past, when I've managed to confuse the zfs storage driver while trying to debug something, and I worked around it as you suggest, by creating an empty dataset with the right name.
I'm not sure if this is a common problem: barring some bug in the storage layer that removes a dataset without adjusting the metadata to match, it can only be caused by someone manually deleting a dataset that is managed by podman or buildah. I do think the zfs storage driver could be fixed to allow deleting the affected image even if the dataset is gone.
This might resolve the issue: containers/storage/pull/2123