docker images are not updated anymore
chatziparaskewas opened this issue · 6 comments
Describe the bug
A fix for the Log4Shell vulnerability has been applied to the release-3.11 branch, but the latest 3.11 elasticsearch5 docker image on quay.io (and elsewhere) does not contain it. Images are rebuilt multiple times a day, yet the published images never change; the 3.11 docker images appear to have been identical for over a year.
Environment
- n/a
Logs
n/a
Expected behavior
Changes on the release-3.11 branch are reflected in the quay.io docker images within a reasonable timeframe.
Actual behavior
Changes on the release-3.11 branch are not reflected in the quay.io docker images.
To Reproduce
Steps to reproduce the behavior:
- docker pull quay.io/openshift/origin-logging-elasticsearch5:v3.11.0
- docker run -it --rm --entrypoint=/bin/bash quay.io/openshift/origin-logging-elasticsearch5:v3.11.0
- check /opt/app-root/src/run.sh for the mitigation flag "-Dlog4j2.formatMsgNoLookups=true" (it is missing)
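The check in the last step can be scripted. The snippet below is a sketch of that grep check; since it cannot assume access to the image, it creates a sample run.sh without the flag (an assumption for illustration, not the real image contents) and reports whether the mitigation is present:

```shell
# Sample run.sh standing in for /opt/app-root/src/run.sh inside the image
# (assumed contents for illustration; the real file differs).
cat > /tmp/run.sh <<'EOF'
exec java $ES_JAVA_OPTS -jar elasticsearch.jar
EOF

# Look for the Log4Shell mitigation flag; `--` keeps grep from treating
# the leading -D as an option.
if grep -q -- '-Dlog4j2.formatMsgNoLookups=true' /tmp/run.sh; then
  echo "mitigation present"
else
  echo "mitigation missing"
fi
```

Against the actual image, the same grep can be run via `docker run --rm --entrypoint=/bin/bash <image> -c '…'`.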
Additional context
none
(I'm not one of the contributors; I just noticed this as an OKD user.)
As you can see, there is a red X next to the commit d119820, which means some of the steps failed during the image build. You can see the failure here:
https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/branch-ci-openshift-origin-aggregated-logging-release-3.11-images/1471476743117213696
That's why the image doesn't contain the fixed code: it doesn't build correctly, so the stale image keeps being republished.
Issues go stale after 90d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle stale
@sosiouxme do you know if we are pushing or maintaining any "upstream" images for 3.11? My suspicion is that in most places we are only building product images for customers.
Stale issues rot after 30d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle rotten
/remove-lifecycle stale
Rotten issues close after 30d of inactivity.
Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.
/close
@openshift-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.