--pull=always does not work with local images
GrabbenD opened this issue · 17 comments
Issue Description
The `--pull=always` flag in `$ podman build` is not compatible with locally built images. Locally built images are automatically prefixed with `localhost/`, which makes `--pull=always` treat them as a locally hosted registry. This does not seem to be the case with `--pull=newer`, which does not complain about network failures.
Related discussion: containers/podman#20121
Steps to reproduce the issue
$ less Containerfile.base
FROM archlinux:base
$ podman build -f Containerfile.base -t mylocalimage --pull=always
Successfully tagged localhost/mylocalimage:latest
$ less Containerfile.extended
FROM mylocalimage
$ podman build -f Containerfile.extended --pull=always
WARN[0000] Failed, retrying in 2s ... (1/3). Error: initializing source docker://localhost/mylocalimage:latest: pinging container registry localhost: Get "https://localhost/v2/": dial tcp [::1]:443: connect: connection refused
$ podman image ls
REPOSITORY                   TAG     IMAGE ID      CREATED     SIZE
localhost/mylocalimage       latest  74269d97bbd7  3 days ago  448 MB
docker.io/library/archlinux  base    74269d97bbd7  3 days ago  448 MB
(Not related to this issue, but I don't know why it says the image was created 3 days ago when I just made it.)
$ date
Mon Sep 25 09:13:50 AM UTC 2023
Describe the results you received
`--pull=always` pings localhost instead of querying local images.
Describe the results you expected
`--pull=always` should recognize that the image is available locally instead of pinging localhost.

Understandably, the best workaround is to not specify `--pull=always` (since Podman automatically uses the newest image for locally built images), but this breaks my CI/CD workflow, which builds multiple Containerfiles in a loop with the same flags.
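One way such a loop could keep a single set of flags (a hypothetical sketch, not something proposed in this thread): pick the `--pull` value per Containerfile by checking whether its base image is already in local storage. `podman image exists` and the `never`/`always` pull policies are real Podman features; the `base_image` and `pick_pull_policy` helpers are made up for this example.

```shell
#!/bin/sh
# base_image FILE: print the reference on the first FROM line of a Containerfile.
base_image() {
  awk 'toupper($1) == "FROM" { print $2; exit }' "$1"
}

# pick_pull_policy FILE: force a pull only when the base image is not
# already present in local storage (locally built images stay untouched).
pick_pull_policy() {
  if podman image exists "$(base_image "$1")"; then
    echo never
  else
    echo always
  fi
}

# Usage (not executed here; requires podman):
#   for f in Containerfile.base Containerfile.extended; do
#     podman build -f "$f" --pull="$(pick_pull_policy "$f")" .
#   done
```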
podman info output
host:
arch: amd64
buildahVersion: 1.31.2
cgroupControllers:
- cpuset
- cpu
- io
- memory
- hugetlb
- pids
- rdma
- misc
cgroupManager: systemd
cgroupVersion: v2
conmon:
package: /usr/bin/conmon is owned by conmon 1:2.1.8-1
path: /usr/bin/conmon
version: 'conmon version 2.1.8, commit: 00e08f4a9ca5420de733bf542b930ad58e1a7e7d'
cpuUtilization:
idlePercent: 99.89
systemPercent: 0.08
userPercent: 0.03
cpus: 32
databaseBackend: boltdb
distribution:
distribution: arch
version: 20230921.0.180222
eventLogger: journald
freeLocks: 2015
hostname: ostree
idMappings:
gidmap: null
uidmap: null
kernel: 6.5.4-arch2-1
linkmode: dynamic
logDriver: journald
memFree: 27389308928
memTotal: 33651609600
networkBackend: netavark
networkBackendInfo:
backend: netavark
dns:
package: Unknown
package: /usr/lib/podman/netavark is owned by netavark 1.7.0-1
path: /usr/lib/podman/netavark
version: netavark 1.7.0
ociRuntime:
name: crun
package: /usr/bin/crun is owned by crun 1.9-1
path: /usr/bin/crun
version: |-
crun version 1.9
commit: a538ac4ea1ff319bcfe2bf81cb5c6f687e2dc9d3
rundir: /run/user/0/crun
spec: 1.0.0
+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
os: linux
pasta:
executable: ""
package: ""
version: ""
remoteSocket:
path: /run/podman/podman.sock
security:
apparmorEnabled: false
capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
rootless: false
seccompEnabled: true
seccompProfilePath: /etc/containers/seccomp.json
selinuxEnabled: false
serviceIsRemote: false
slirp4netns:
executable: /usr/bin/slirp4netns
package: /usr/bin/slirp4netns is owned by slirp4netns 1.2.2-1
version: |-
slirp4netns version 1.2.2
commit: 0ee2d87523e906518d34a6b423271e4826f71faf
libslirp: 4.7.0
SLIRP_CONFIG_VERSION_MAX: 4
libseccomp: 2.5.4
swapFree: 0
swapTotal: 0
uptime: 2h 52m 13.00s (Approximately 0.08 days)
plugins:
authorization: null
log:
- k8s-file
- none
- passthrough
- journald
network:
- bridge
- macvlan
- ipvlan
volume:
- local
registries: {}
store:
configFile: /etc/containers/storage.conf
containerStore:
number: 33
paused: 0
running: 0
stopped: 33
graphDriverName: overlay
graphOptions:
overlay.mountopt: nodev
graphRoot: /var/lib/containers/storage
graphRootAllocated: 26506952704
graphRootUsed: 16536727552
graphStatus:
Backing Filesystem: xfs
Native Overlay Diff: "true"
Supports d_type: "true"
Using metacopy: "false"
imageCopyTmpDir: /var/tmp
imageStore:
number: 47
runRoot: /run/containers/storage
transientStore: false
volumePath: /var/lib/containers/storage/volumes
version:
APIVersion: 4.6.2
Built: 1693343961
BuiltTime: Tue Aug 29 21:19:21 2023
GitCommit: 5db42e86862ef42c59304c38aa583732fd80f178-dirty
GoVersion: go1.21.0
Os: linux
OsArch: linux/amd64
Version: 4.6.2
Podman in a container
No
Privileged Or Rootless
Privileged
Upstream Latest Release
Yes
Additional environment details
N/A
Additional information
N/A
See comment: containers/podman#20125 (comment). I think it should help.
Thanks, that makes sense @flouthoc!
I think something is wrong, though:
$ podman build -f Containerfile.extended --pull=true
STEP 1/1: FROM mylocalimage
Trying to pull localhost/mylocalimage:latest...
WARN[0000] Failed, retrying in 2s ... (1/3). Error: initializing source docker://localhost/mylocalimage:latest: pinging container registry localhost: Get "https://localhost/v2/": dial tcp [::1]:443: connect: connection refused
After reading the PR changes in containers/podman#20124, I think the buildah documentation may also be incorrect. Let's wait for comments on the PR.
`--pull=always` on an image with a `localhost/` prefix is not strictly related to the docs issue. What it boils down to is: do we want to "relax" the `always` pull policy when the reference points to `localhost/`?

I feel rather against it, because `--pull=always` instructs Podman to always pull. Podman has no knowledge of whether the user is referencing a local-only image or not, so I prefer that users who decide on `--pull=always` make a conscious decision about it.
Note that I have containers/podman#20124 open
> Do we want to "relax" the always pull policy when the reference points to localhost/?
>
> I feel rather against it, because --pull=always instructs to always pull. Podman has no knowledge whether the user is referencing a local-only image or not. So I prefer users who decide for --pull=always to make a conscious decision about it.
I agree with your assessment
What about a new `relaxed` option which forcefully pulls external images but not `localhost/` ones? Localhost images are already up to date, as they're built locally in the first place 🙂

(This is probably only useful for people like me who would like to reuse the same Podman options for building local and external images.)
@GrabbenD, did you check out `--pull=newer`? It will always pull an image if there's a newer one on the registry. If the registry cannot be reached, the local one is used.
@vrothberg I tried it, but there are no logs confirming that my current (external) image is up to date with the upstream repo (which leaves me unsure whether it works, since the build completes instantaneously), and the documentation advises against the `newer` option because comparing timestamps is prone to errors. Maybe I'm overthinking it? 🙂
@GrabbenD I think the docs may have confused you. They state that "Comparing the time stamps is prone to errors," but `newer` does not compare timestamps: "An image is considered to be newer when the digests are different."

In other words: an image in the local storage will only be re-pulled if the digest on the registry differs.
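You can observe the digest comparison yourself. The sketch below is my own illustration of the idea, not Podman's actual code path; `skopeo inspect --format '{{.Digest}}'` and `podman image inspect --format '{{.Digest}}'` are real invocations, while `same_digest` is a made-up helper.

```shell
#!/bin/sh
# same_digest REMOTE LOCAL: succeed when both digests are non-empty and equal.
same_digest() {
  [ -n "$1" ] && [ "$1" = "$2" ]
}

# Real-world usage (not executed here; requires network access and skopeo):
#   remote=$(skopeo inspect --format '{{.Digest}}' docker://docker.io/library/archlinux:base)
#   local_digest=$(podman image inspect --format '{{.Digest}}' docker.io/library/archlinux:base)
#   if same_digest "$remote" "$local_digest"; then
#     echo "local image is up to date; --pull=newer would not pull"
#   else
#     echo "digests differ; --pull=newer would pull"
#   fi
```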
> but there's no logs which confirms that my current (external) image is up to date with upstream repo
Which kind of logs (and where) are you looking for?
That makes sense; yes, the documentation was confusing.
> Which kind of logs (and where) are you looking for?
For comparison, the console output from `$ podman build -f Containerfile.base --pull=always` with an external image makes it clear that something is happening:

Copying blob 38f5a258dd4f skipped: already exists

In contrast, `--pull=newer` doesn't print anything to the console at all (which is the same behavior as the `--pull=missing` option or `--pull=invalidoptionhere`; there's no indication that it works or that something is wrong).
You would like a Debug message saying the image was not pulled?
> You would like a Debug message saying the image was not pulled?
Yes, I believe it would be very helpful for seeing what's actually going on (e.g. if something is misspelled), since right now you can't tell the difference between the flag working and the flag being invalid.
(P.S. Sorry for the late reply.)
A friendly reminder that this issue had no activity for 30 days.
(Ticket is still relevant @github-actions)
A friendly reminder that this issue had no activity for 30 days.
(@github-actions bump)