openshift/cluster-logging-operator

must-gather collections docs don't work on power and z

sferich888 opened this issue · 4 comments

https://github.com/openshift/cluster-logging-operator/tree/master/must-gather#usage suggests using quay.io/openshift-logging/cluster-logging-operator:latest as the image/version to pull diagnostic information.

This image doesn't appear to be multi-arch capable, unlike the downstream image (see:
https://docs.openshift.com/container-platform/4.8/support/gathering-cluster-data.html#gathering-data-specific-features_gathering-cluster-data)

This can lead to failures if this documentation is referenced or followed on a Power or Z cluster.

To Reproduce
Steps to reproduce the behavior:

  1. oc adm must-gather --image=quay.io/openshift-logging/cluster-logging-operator:latest -- /usr/bin/gather
  • On a Power or Z cluster
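The failure comes down to whether the image tag resolves to a multi-arch manifest list. As a minimal sketch, here is how one could check which architectures a manifest list advertises; the JSON below is an illustrative sample, not the actual quay.io manifest (which could be fetched with a tool such as `skopeo inspect --raw docker://<image>`):

```python
import json

# Illustrative OCI/Docker manifest list. A real one would be fetched from
# the registry; this sample is hypothetical, with truncated digests.
sample_index = json.loads("""
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.docker.distribution.manifest.list.v2+json",
  "manifests": [
    {"digest": "sha256:aaa", "platform": {"architecture": "amd64", "os": "linux"}},
    {"digest": "sha256:bbb", "platform": {"architecture": "ppc64le", "os": "linux"}},
    {"digest": "sha256:ccc", "platform": {"architecture": "s390x", "os": "linux"}}
  ]
}
""")

def supported_architectures(index: dict) -> set:
    """Return the set of architectures a manifest list advertises.

    A single-arch image manifest has no "manifests" key, so this
    returns an empty set for it -- which is exactly the situation
    that breaks must-gather on Power (ppc64le) and Z (s390x).
    """
    return {m["platform"]["architecture"] for m in index.get("manifests", [])}

archs = supported_architectures(sample_index)
for wanted in ("ppc64le", "s390x"):
    print(wanted, "supported:", wanted in archs)
```

If the set returned for the quay.io image lacks ppc64le/s390x while the registry.redhat.io image includes them, that confirms the mismatch described above.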

We might simply add a note to the README saying that this image is only built for x86_64, and/or where to find images for other architectures.

We can add the note about the architectures, but shouldn't "official" users of OpenShift Logging, which is based on internally built images, be following Red Hat documentation that points them to the proper image rather than this README?

Yes, downstream Red Hat users are given documentation that tells them to use images that have multi-arch manifests.

However, if you're working upstream or simply stumble on this page, it's very easy to assume that the quay.io images and the registry.redhat.io images are the same.

IMO a simple warning in the readme is enough to close this.

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten
/remove-lifecycle stale