ClusterLogging - MountVolume.SetUp failed for volume "collector" : secret "collector" not found
abdennour opened this issue · 1 comment
Describe the bug
When deploying the ClusterLogging instance below, we observed that all collector-xxxxx pods stay stuck in the ContainerCreating state. At first glance we thought it was a timeout pulling images, but after describing one of those pods we saw that it cannot mount one of the secrets as a volume.
We have another environment running an older version of the operator, and the collector secret exists there.
Why does the recent version of this operator not generate that secret?
Is there any backward-compatible way to fix this?
Environment
- OCP v4.11
- elasticsearch-operator (channel=stable-5.5)
- logging-operator (channel=stable-5.5)
- ClusterLogging instance
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  annotations:
    argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true
    argocd.argoproj.io/sync-wave: '7'
  labels:
    app.kubernetes.io/instance: logging-stack
  name: instance
  namespace: openshift-logging
spec:
  collection:
    resources:
      limits:
        memory: 736Mi
      requests:
        cpu: 100m
        memory: 736Mi
    type: fluentd
  logStore:
    elasticsearch:
      nodeCount: 3
      nodeSelector:
        node-role.kubernetes.io/infra: ''
      proxy:
        resources:
          limits:
            memory: 256Mi
          requests:
            memory: 256Mi
      redundancyPolicy: SingleRedundancy
      resources:
        limits:
          memory: 8Gi
        requests:
          cpu: '1'
          memory: 8Gi
      storage:
        size: 20Gi
        storageClassName: ntnx-block-storage-iops
      tolerations:
        - effect: NoSchedule
          key: node-role.kubernetes.io/infra
          operator: Exists
    retentionPolicy:
      application:
        maxAge: 20d
      audit:
        maxAge: 60d
      infra:
        maxAge: 20d
    type: elasticsearch
  managementState: Managed
  visualization:
    kibana:
      nodeSelector:
        node-role.kubernetes.io/infra: ''
      replicas: 2
      tolerations:
        - effect: NoSchedule
          key: node-role.kubernetes.io/infra
          operator: Exists
    type: kibana
Logs
# oc -n openshift-logging describe pod collector-xxxx
.....
Warning FailedMount 9s (x7 over 40s) kubelet MountVolume.SetUp failed for volume "collector" : secret "collector" not found
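The FailedMount warning can be cross-checked directly with the oc CLI. A diagnostic sketch, assuming the default openshift-logging namespace and the usual cluster-logging-operator deployment name (both assumptions about this environment; the commands need a live cluster):

```shell
# The collector pods mount this secret; on the broken cluster it is absent
oc -n openshift-logging get secret collector

# List what the operator did generate, for comparison with the old environment
oc -n openshift-logging get secrets

# Operator logs may show why the secret was never reconciled
# (deployment name assumed to be cluster-logging-operator)
oc -n openshift-logging logs deployment/cluster-logging-operator
```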
Expected behavior
The operator is expected to auto-generate the collector secret.
Actual behavior
The secret is missing, so the collector pods cannot run because they cannot mount it as a volume.
To Reproduce
Steps to reproduce the behavior:
- Set up the same environment as described above
- Apply the same ClusterLogging instance as above
- Wait for the collector pods to reach the Running state; they never will, and remain stuck in ContainerCreating
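The stuck state in the last step can be watched with standard oc commands. A sketch, assuming the collector pods carry a component=collector label (an assumption about how the operator labels them) and a live cluster:

```shell
# Watch collector pods; they stay in ContainerCreating instead of reaching Running
# (label selector is an assumption about the operator's pod labels)
oc -n openshift-logging get pods -l component=collector --watch

# The mount failure also appears in the namespace events
oc -n openshift-logging get events --field-selector reason=FailedMount
```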
Additional context
N/A
Please open an issue at http://issues.redhat.com for the LOG project. I cannot speak to what the issue is without more information. Please open the issue there and provide a must-gather.