mariadb-operator/mariadb-operator

[Bug] Galera cluster fails to recover

Describe the bug
After trying to restore a dump into the Galera cluster, the pods start crashing and the cluster is not able to recover.
I tried some of the steps suggested in this issue [https://github.com//issues/580], but I am still facing the same problem.
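
For reference, a restore of this kind is usually declared with a Restore resource along these lines (a minimal sketch; the names are hypothetical and the actual restore procedure used here may differ):

apiVersion: k8s.mariadb.com/v1alpha1
kind: Restore
metadata:
  name: restore-mariadb-01   # hypothetical name
  namespace: databases
spec:
  mariaDbRef:
    name: mariadb-01         # the MariaDB instance from this issue
  backupRef:
    name: backup-mariadb-01  # hypothetical Backup resource containing the dump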

NAME                                        READY   STATUS             RESTARTS         AGE
mariadb-01-0                                1/2     CrashLoopBackOff   13 (2m11s ago)   43m
mariadb-01-1                                2/2     Running            0                43m
mariadb-01-2                                1/2     CrashLoopBackOff   10 (5m5s ago)    37m

The primary pod in the Galera cluster is in Running status, but it does not work.

I have cert-manager installed in my cluster and I set the `webhook.cert.certManager.enabled` value to `true`.
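
With `certManager.enabled: true`, an issuer can optionally be referenced from the chart values; a minimal sketch, assuming a hypothetical ClusterIssuer named `my-cluster-issuer` (if no issuerRef is provided, a self-signed issuer is used, as noted in the values below):

webhook:
  cert:
    certManager:
      enabled: true
      # optional: reference an existing issuer; "my-cluster-issuer" is hypothetical
      issuerRef:
        name: my-cluster-issuer
        kind: ClusterIssuer
        group: cert-manager.io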

Values

mariadb-operator

nameOverride: ""
fullnameOverride: ""

image:
  repository: ghcr.io/mariadb-operator/mariadb-operator
  pullPolicy: IfNotPresent
  # -- Image tag to use. By default the chart appVersion is used
  tag: ""
imagePullSecrets: []

# -- Controller log level
logLevel: debug

# -- Cluster DNS name
clusterName: cluster.local

ha:
  # -- Enable high availability
  enabled: true
  # -- Number of replicas
  replicas: 3

metrics:
  # -- Enable operator internal metrics. Prometheus must be installed in the cluster
  enabled: false
  serviceMonitor:
    # -- Enable controller ServiceMonitor
    enabled: true
    # -- Labels to be added to the controller ServiceMonitor
    additionalLabels: {}
    # release: kube-prometheus-stack
    # --  Interval to scrape metrics
    interval: 30s
    # -- Timeout if metrics can't be retrieved in given time interval
    scrapeTimeout: 25s

serviceAccount:
  # -- Specifies whether a service account should be created
  enabled: true
  # -- Automounts the service account token in all containers of the Pod
  automount: true
  # -- Annotations to add to the service account
  annotations: {}
  # -- Extra Labels to add to the service account
  extraLabels: {}
  # -- The name of the service account to use.
  # If not set and enabled is true, a name is generated using the fullname template
  name: ""

rbac:
  # -- Specifies whether RBAC resources should be created
  enabled: true

# -- Extra arguments to be passed to the controller entrypoint
extrArgs: []

# -- Extra environment variables to be passed to the controller
extraEnv: []

# -- Extra volumes to pass to pod.
extraVolumes: []

# -- Extra volumes to mount to the container.
extraVolumeMounts: []

# -- Annotations to add to controller Pod
podAnnotations: {}

# -- Security context to add to controller Pod
podSecurityContext: {}

# -- Security context to add to controller container
securityContext: {}

# -- Resources to add to controller container
resources: {}
# requests:
#   cpu: 10m
#   memory: 32Mi

# -- Node selectors to add to controller Pod
nodeSelector: {}

# -- Tolerations to add to controller Pod
tolerations: []

# -- Affinity to add to controller Pod
affinity: {}

webhook:
  image:
    repository: ghcr.io/mariadb-operator/mariadb-operator
    pullPolicy: IfNotPresent
    # -- Image tag to use. By default the chart appVersion is used
    tag: ""
  imagePullSecrets: []
  ha:
    # -- Enable high availability
    enabled: false
    # -- Number of replicas
    replicas: 3
  cert:
    caPath: /tmp/k8s-webhook-server/certificate-authority

    certManager:
      # -- Whether to use cert-manager to issue and rotate the certificate. If set to false, mariadb-operator's cert-controller will be used instead.
      enabled: true
      # -- Issuer reference to be used in the Certificate resource. If not provided, a self-signed issuer will be used.
      issuerRef: {}
      # -- Duration to be used in the Certificate resource,
      duration: ""
      # -- Renew before duration to be used in the Certificate resource.
      renewBefore: ""
    # -- Annotations to be added to webhook TLS secret.
    secretAnnotations: {}
    # -- Labels to be added to webhook TLS secret.
    secretLabels: {}

    # -- Path where the certificate will be mounted. 'tls.crt' and 'tls.key' certificates files should be under this path.
    path: /tmp/k8s-webhook-server/serving-certs
  # -- Port to be used by the webhook server
  port: 9443
  # -- Expose the webhook server in the host network
  hostNetwork: false
  serviceMonitor:
    # -- Enable webhook ServiceMonitor. Metrics must be enabled
    enabled: true
    # -- Labels to be added to the webhook ServiceMonitor
    additionalLabels: {}
    # release: kube-prometheus-stack
    # --  Interval to scrape metrics
    interval: 30s
    # -- Timeout if metrics can't be retrieved in given time interval
    scrapeTimeout: 25s
  serviceAccount:
    # -- Specifies whether a service account should be created
    enabled: true
    # -- Automounts the service account token in all containers of the Pod
    automount: true
    # -- Annotations to add to the service account
    annotations: {}
    # -- Extra Labels to add to the service account
    extraLabels: {}
    # -- The name of the service account to use.
    # If not set and enabled is true, a name is generated using the fullname template
    name: ""
  # -- Annotations for webhook configurations.
  annotations: {}
  # -- Extra arguments to be passed to the webhook entrypoint
  extrArgs: []
  # -- Extra volumes to pass to webhook Pod
  extraVolumes: []
  # -- Extra volumes to mount to webhook container
  extraVolumeMounts: []
  # -- Annotations to add to webhook Pod
  podAnnotations: {}
  # -- Security context to add to webhook Pod
  podSecurityContext: {}
  # -- Security context to add to webhook container
  securityContext: {}
  # -- Resources to add to webhook container
  resources: {}
  # requests:
  #   cpu: 10m
  #   memory: 32Mi
  # -- Node selectors to add to controller Pod
  nodeSelector: {}
  # -- Tolerations to add to controller Pod
  tolerations: []
  # -- Affinity to add to controller Pod
  affinity: {}

certController:
  # -- Specifies whether the cert-controller should be created.
  enabled: true
  image:
    repository: ghcr.io/mariadb-operator/mariadb-operator
    pullPolicy: IfNotPresent
    # -- Image tag to use. By default the chart appVersion is used
    tag: ""
  imagePullSecrets: []
  ha:
    # -- Enable high availability
    enabled: false
    # -- Number of replicas
    replicas: 3
  # -- CA certificate validity. It must be greater than certValidity.
  caValidity: 35064h
  # -- Certificate validity.
  certValidity: 8766h
  # -- Duration used to verify whether a certificate is valid or not.
  lookaheadValidity: 2160h
  # -- Requeue duration to ensure that certificate gets renewed.
  requeueDuration: 5m
  serviceMonitor:
    # -- Enable cert-controller ServiceMonitor. Metrics must be enabled
    enabled: true
    # -- Labels to be added to the cert-controller ServiceMonitor
    additionalLabels: {}
    # release: kube-prometheus-stack
    # --  Interval to scrape metrics
    interval: 30s
    # -- Timeout if metrics can't be retrieved in given time interval
    scrapeTimeout: 25s
  serviceAccount:
    # -- Specifies whether a service account should be created
    enabled: true
    # -- Automounts the service account token in all containers of the Pod
    automount: true
    # -- Annotations to add to the service account
    annotations: {}
    # -- Extra Labels to add to the service account
    extraLabels: {}
    # -- The name of the service account to use.
    # If not set and enabled is true, a name is generated using the fullname template
    name: ""
  # -- Extra arguments to be passed to the cert-controller entrypoint
  extrArgs: []
  # -- Extra volumes to pass to cert-controller Pod
  extraVolumes: []
  # -- Extra volumes to mount to cert-controller container
  extraVolumeMounts: []
  # -- Annotations to add to cert-controller Pod
  podAnnotations: {}
  # -- Security context to add to cert-controller Pod
  podSecurityContext: {}
  # -- Security context to add to cert-controller container
  securityContext: {}
  # -- Resources to add to cert-controller container
  resources: {}
  # requests:
  #   cpu: 10m
  #   memory: 32Mi
  # -- Node selectors to add to controller Pod
  nodeSelector: {}
  # -- Tolerations to add to controller Pod
  tolerations: []
  # -- Affinity to add to controller Pod
  affinity: {}

MariaDB resource

apiVersion: k8s.mariadb.com/v1alpha1
kind: MariaDB
metadata:
  name: mariadb-01
  namespace: databases
spec:
  affinity: {}
  database: mariadb
  image: mariadb:11.0.3
  metrics:
    enabled: true
    exporter:
      image: prom/mysqld-exporter:v0.15.1
      port: 9104
    passwordSecretKeyRef:
      key: password
      name: mariadb-01-metrics-password
    serviceMonitor: {}
    username: mariadb-01-metrics
  myCnf: |
    [mariadb]
    bind-address=*
    default_storage_engine=InnoDB
    binlog_format=row
    innodb_autoinc_lock_mode=2
    max_allowed_packet=256M
    innodb_buffer_pool_size=8G
    innodb_log_file_size=1G
    performance_schema=ON
    join_buffer_size=256K
  myCnfConfigMapKeyRef:
    key: my.cnf
    name: mariadb-01-config
  passwordSecretKeyRef:
    key: password
    name: mariadb
  podDisruptionBudget:
    maxUnavailable: 33%
  port: 3306
  replicas: 3

  galera:
    enabled: true
    primary:
      automaticFailover: true
    sst: mariabackup
    availableWhenDonor: false
    galeraLibPath: /usr/lib/galera/libgalera_smm.so
    replicaThreads: 1
    # providerOptions:
    #   gcs.fc_limit: '64'
    # agent:
    #   image: ghcr.io/mariadb-operator/mariadb-operator:v0.0.28
    #   port: 5555
    #   kubernetesAuth:
    #     enabled: true
    #   gracefulShutdownTimeout: 1s
    recovery:
      enabled: true
      minClusterSize: 50%
      # clusterMonitorInterval: 10s
      clusterHealthyTimeout: 30s
      clusterBootstrapTimeout: 10m
      podRecoveryTimeout: 3m
      podSyncTimeout: 3m
    initContainer:
      image: ghcr.io/mariadb-operator/mariadb-operator:v0.0.28
    # initJob:
    #   metadata:
    #     labels:
    #       sidecar.istio.io/inject: "false"
    #   args:
    #   - "--verbose"
    #   affinity:
    #     antiAffinityEnabled: true
    #   resources:
    #     requests:
    #       cpu: 100m
    #       memory: 128Mi
    #     limits:
    #       memory: 1Gi
    config:
      reuseStorageVolume: true
      volumeClaimTemplate:
        resources:
          requests:
            storage: 300Mi
        accessModes:
        - ReadWriteOnce

  podSecurityContext:
    runAsUser: 0
  resources:
    limits:
      memory: 16Gi
    requests:
      cpu: 100m
      memory: 8Gi
  rootEmptyPassword: false
  rootPasswordSecretKeyRef:
    key: root-password
    name: mariadb
  serviceAccountName: mariadb-01
  storage:
    ephemeral: false
    resizeInUseVolumes: true
    size: 100Gi
    storageClassName: default
    volumeClaimTemplate:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 100Gi
      storageClassName: default
    waitForVolumeResize: true
  tolerations:
  - effect: NoSchedule
    key: k8s.mariadb.com/ha
    operator: Exists
  updateStrategy:
    type: RollingUpdate
  username: mariadb

# connection:
#   secretName: connection-zabbix
#   secretTemplate:
#     key: dsn
#   healthCheck:
#     interval: 10s
#     retryInterval: 3s
#   params:
#     parseTime: "true"

Logs

mariadb-01-1

agent {"level":"info","ts":1717601481.1399832,"logger":"handler.probe.readiness","msg":"Galera not ready. Returning OK to facilitate recovery"}
agent {"level":"info","ts":1717601481.1410058,"logger":"handler.probe.liveness","msg":"Galera not ready. Returning OK to facilitate recovery"}
mariadb =================================================
mariadb 2024-06-05 15:32:05 2 [Note] WSREP: Non-primary view
mariadb 2024-06-05 15:32:05 2 [Note] WSREP: Server status change connected -> connected
mariadb 2024-06-05 15:32:05 2 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
agent {"level":"info","ts":1717601491.1425786,"logger":"handler.probe.readiness","msg":"Galera not ready. Returning OK to facilitate recovery"}
agent {"level":"info","ts":1717601496.1396935,"logger":"handler.probe.liveness","msg":"Galera not ready. Returning OK to facilitate recovery"}
mariadb 2024-06-05 15:32:05 2 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
mariadb 2024-06-05 15:32:05 2 [Note] WSREP: ================================================
mariadb View:
mariadb   id: 00000000-0000-0000-0000-000000000000:-1
agent {"level":"info","ts":1717601496.1412778,"logger":"handler.probe.readiness","msg":"Galera not ready. Returning OK to facilitate recovery"}
agent {"level":"info","ts":1717601501.1393824,"logger":"handler.probe.readiness","msg":"Galera not ready. Returning OK to facilitate recovery"}
mariadb   status: non-primary
mariadb   protocol_version: -1
mariadb   capabilities: 
mariadb   final: no
agent {"level":"info","ts":1717601501.1394558,"logger":"handler.probe.liveness","msg":"Galera not ready. Returning OK to facilitate recovery"}
mariadb   own_index: 0
mariadb   members(1):
mariadb     0: 102ed9c1-2341-11ef-ac26-0bf5635aff9e, mariadb-01-1
mariadb =================================================
mariadb 2024-06-05 15:32:05 2 [Note] WSREP: Non-primary view
mariadb 2024-06-05 15:32:05 2 [Note] WSREP: Server status change connected -> connected
mariadb 2024-06-05 15:32:05 2 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
mariadb 2024-06-05 15:32:05 2 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
mariadb 2024-06-05 15:32:07 0 [Note] WSREP: (102ed9c1-ac26, 'tcp://0.0.0.0:4567') connection to peer 00000000-0000 with addr tcp://10.233.12.138:4567 timed out, no messages seen in PT3S, socket stats: rtt: 0 rttvar: 250000 rto: 2000000 lost: 1 last_data_recv: 1717007368 cwnd: 1 last_queued_since: 1717307989282898 last_delivered_since: 1717307989282898 send_queue_length: 0 send_queue_bytes: 0
mariadb 2024-06-05 15:32:21 0 [Note] WSREP: (102ed9c1-ac26, 'tcp://0.0.0.0:4567') connection to peer 00000000-0000 with addr tcp://10.233.12.83:4567 timed out, no messages seen in PT3S, socket stats: rtt: 0 rttvar: 250000 rto: 4000000 lost: 1 last_data_recv: 1717021868 cwnd: 1 last_queued_since: 1717322490656951 last_delivered_since: 1717322490656951 send_queue_length: 0 send_queue_bytes: 0
mariadb 2024-06-05 15:32:26 0 [Note] WSREP: (102ed9c1-ac26, 'tcp://0.0.0.0:4567') connection to peer 00000000-0000 with addr tcp://10.233.12.138:4567 timed out, no messages seen in PT3S, socket stats: rtt: 0 rttvar: 250000 rto: 4000000 lost: 1 last_data_recv: 
mariadb 2024-06-05 15:32:32 0 [Note] WSREP: (102ed9c1-ac26, 'tcp://0.0.0.0:4567') reconnecting to 0a951ac1-ac6d (tcp://10.233.12.83:4567), attempt 600
mariadb 2024-06-05 15:32:35 0 [Note] WSREP: (102ed9c1-ac26, 'tcp://0.0.0.0:4567') connection to peer 00000000-0000 with addr tcp://10.233.12.83:4567 timed out, no messages seen in PT3S, socket stats: rtt: 0 rttvar: 250000 rto: 2000000 lost: 1 last_data_recv: 1717035868 cwnd: 1 last_queued_since: 1717336491996336 last_delivered_since: 1717336491996336 send_queue_length: 0 send_queue_bytes: 0
mariadb 2024-06-05 15:32:36 0 [Note] WSREP: (102ed9c1-ac26, 'tcp://0.0.0.0:4567') connection to peer 00000000-0000 with addr tcp://10.233.12.138:4567 timed out, no messages seen in PT3S, socket stats: rtt: 0 rttvar: 250000 rto: 4000000 lost: 1 last_data_recv: 1717036872 cwnd: 1 last_queued_since: 1717337492093031 last_delivered_since: 1717337492093031 send_queue_length: 0 send_queue_bytes: 0
mariadb 2024-06-05 15:32:40 0 [Note] WSREP: (102ed9c1-ac26, 'tcp://0.0.0.0:4567') connection to peer 00000000-0000 with addr tcp://10.233.12.83:4567 timed out, no messages seen in PT3S, socket stats: rtt: 0 rttvar: 250000 rto: 4000000 lost: 1 last_data_recv: 1717040372 cwnd: 1 last_queued_since: 1717340992479811 last_delivered_since: 1717340992479811 send_queue_length: 0 send_queue_bytes: 0
mariadb 2024-06-05 15:32:40 0 [Note] WSREP: (102ed9c1-ac26, 'tcp://0.0.0.0:4567') connection to peer 00000000-0000 with addr tcp://10.233.12.138:4567 timed out, no messages seen in PT3S, socket stats: rtt: 0 rttvar: 250000 rto: 2000000 lost: 1 last_data_recv: 1717040872 cwnd: 1 last_queued_since: 1717341492505677 last_delivered_since: 1717341492505677 send_queue_length: 0 send_queue_bytes: 0
agent {"level":"info","ts":1717601546.1384304,"logger":"handler.probe.liveness","msg":"Galera not ready. Returning OK to facilitate recovery"}
agent {"level":"info","ts":1717601551.1372435,"logger":"handler.probe.readiness","msg":"Galera not ready. Returning OK to facilitate recovery"}
agent {"level":"info","ts":1717601551.1380827,"logger":"handler.probe.liveness","msg":"Galera not ready. Returning OK to facilitate recovery"}
agent {"level":"info","ts":1717601556.1407156,"logger":"handler.probe.readiness","msg":"Galera not ready. Returning OK to facilitate recovery"}
agent {"level":"info","ts":1717601556.1411502,"logger":"handler.probe.liveness","msg":"Galera not ready. Returning OK to facilitate recovery"}
agent {"level":"info","ts":1717601561.137838,"logger":"handler.probe.readiness","msg":"Galera not ready. Returning OK to facilitate recovery"}
agent {"level":"info","ts":1717601561.1395278,"logger":"handler.probe.liveness","msg":"Galera not ready. Returning OK to facilitate recovery"}
Stream closed EOF for databases/mariadb-01-1 (init)
mariadb 2024-06-05 15:32:44 0 [Note] WSREP: (102ed9c1-ac26, 'tcp://0.0.0.0:4567') connection to peer 00000000-0000 with addr tcp://10.233.12.138:4567 timed out, no messages seen in PT3S, socket stats: rtt: 0 rttvar: 250000 rto: 2000000 lost: 1 last_data_recv: 1717044872 cwnd: 1 last_queued_since: 1717345492838188 last_delivered_since: 1717345492838188 send_queue_length: 0 send_queue_bytes: 0
mariadb 2024-06-05 15:32:45 0 [Note] WSREP: (102ed9c1-ac26, 'tcp://0.0.0.0:4567') connection to peer 00000000-0000 with addr tcp://10.233.12.83:4567 timed out, no messages seen in PT3S, socket stats: rtt: 0 rttvar: 250000 rto: 4000000 lost: 1 last_data_recv: 1717045372 cwnd: 1 last_queued_since: 1717345992875320 last_delivered_since: 1717345992875320 send_queue_length: 0 send_queue_bytes: 0
agent {"level":"info","ts":1717601566.1368537,"logger":"handler.probe.readiness","msg":"Galera not ready. Returning OK to facilitate recovery"}
agent {"level":"info","ts":1717601566.1387043,"logger":"handler.probe.liveness","msg":"Galera not ready. Returning OK to facilitate recovery"}
mariadb 2024-06-05 15:32:46 0 [Note] WSREP: (102ed9c1-ac26, 'tcp://0.0.0.0:4567') reconnecting to ad65216e-bde4 (tcp://10.233.12.91:4567), attempt 30
mariadb 2024-06-05 15:32:49 0 [Note] WSREP: (102ed9c1-ac26, 'tcp://0.0.0.0:4567') connection to peer 00000000-0000 with addr tcp://10.233.12.138:4567 timed out, no messages seen in PT3S, socket stats: rtt: 0 rttvar: 250000 rto: 2000000 lost: 1 last_data_recv: 1717049372 cwnd: 1 last_queued_since: 1717349993207897 last_delivered_since: 1717349993207897 send_queue_length: 0 send_queue_bytes: 0
mariadb 2024-06-05 15:32:50 0 [Note] WSREP: (102ed9c1-ac26, 'tcp://0.0.0.0:4567') connection to peer 00000000-0000 with addr tcp://10.233.12.83:4567 timed out, no messages seen in PT3S, socket stats: rtt: 0 rttvar: 250000 rto: 4000000 lost: 1 last_data_recv: 1717050372 cwnd: 1 last_queued_since: 1717350993268884 last_delivered_since: 1717350993268884 send_queue_length: 0 send_queue_bytes: 0
agent {"level":"info","ts":1717601571.1393573,"logger":"handler.probe.readiness","msg":"Galera not ready. Returning OK to facilitate recovery"}
agent {"level":"info","ts":1717601571.1393747,"logger":"handler.probe.liveness","msg":"Galera not ready. Returning OK to facilitate recovery"}
mariadb 2024-06-05 15:32:53 0 [Note] WSREP: (102ed9c1-ac26, 'tcp://0.0.0.0:4567') connection to peer 00000000-0000 with addr tcp://10.233.12.138:4567 timed out, no messages seen in PT3S, socket stats: rtt: 0 rttvar: 250000 rto: 4000000 lost: 1 last_data_recv: 1717053872 cwnd: 1 last_queued_since: 1717354493584976 last_delivered_since: 1717354493584976 send_queue_length: 0 send_queue_bytes: 0
mariadb 2024-06-05 15:32:54 0 [Note] WSREP: (102ed9c1-ac26, 'tcp://0.0.0.0:4567') connection to peer 00000000-0000 with addr tcp://10.233.12.83:4567 timed out, no messages seen in PT3S, socket stats: rtt: 0 rttvar: 250000 rto: 4000000 lost: 1 last_data_recv: 1717054872 cwnd: 1 last_queued_since: 1717355493660087 last_delivered_since: 1717355493660087 send_queue_length: 0 send_queue_bytes: 0

Other mariadb-01 pods

init {"level":"info","ts":1717599238.817479,"msg":"Starting init"}
init {"level":"info","ts":1717599238.8512795,"msg":"Configuring Galera"}
init {"level":"info","ts":1717599238.8522012,"msg":"Init done"}
agent {"level":"info","ts":1717599239.6220412,"msg":"Starting agent"}
agent {"level":"info","ts":1717599239.6230922,"logger":"server","msg":"server listening","addr":":5555"}
agent {"level":"info","ts":1717599262.9525533,"logger":"handler.probe.readiness","msg":"Galera not ready. Returning OK to facilitate recovery"}
agent {"level":"info","ts":1717601512.9210386,"logger":"handler.probe.readiness","msg":"Galera not ready. Returning OK to facilitate recovery"}
agent {"level":"info","ts":1717601512.9218042,"logger":"handler.probe.liveness","msg":"Galera not ready. Returning OK to facilitate recovery"}
agent {"level":"info","ts":1717601517.9219253,"logger":"handler.probe.liveness","msg":"Galera not ready. Returning OK to facilitate recovery"}
agent {"level":"info","ts":1717601517.9222922,"logger":"handler.probe.readiness","msg":"Galera not ready. Returning OK to facilitate recovery"}
Stream closed EOF for databases/mariadb-01-2 (init)
mariadb 2024-06-05 15:31:27+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:11.0.3+maria~ubu2204 started.
mariadb 2024-06-05 15:31:28+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
mariadb 2024-06-05 15:31:28+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:11.0.3+maria~ubu2204 started.
mariadb 2024-06-05 15:31:28+00:00 [Note] [Entrypoint]: MariaDB upgrade information missing, assuming required
mariadb 2024-06-05 15:31:28+00:00 [Note] [Entrypoint]: MariaDB upgrade (mariadb-upgrade) required, but skipped due to $MARIADB_AUTO_UPGRADE setting
mariadb 2024-06-05 15:31:28 0 [Note] Starting MariaDB 11.0.3-MariaDB-1:11.0.3+maria~ubu2204 source revision 70905bcb9059dcc40db3b73bc46a36c7d40f1e10 as process 1
mariadb 2024-06-05 15:31:28 0 [Note] WSREP: Loading provider /usr/lib/galera/libgalera_smm.so initial position: 00000000-0000-0000-0000-000000000000:-1
mariadb 2024-06-05 15:31:28 0 [Note] WSREP: wsrep_load(): loading provider library '/usr/lib/galera/libgalera_smm.so'
mariadb 2024-06-05 15:31:28 0 [Note] WSREP: wsrep_load(): Galera 26.4.14(r06a0c285) by Codership Oy <info@codership.com> loaded successfully.
mariadb 2024-06-05 15:31:28 0 [Note] WSREP: Initializing allowlist service v1
mariadb 2024-06-05 15:31:28 0 [Note] WSREP: Initializing event service v1
mariadb 2024-06-05 15:31:28 0 [Note] WSREP: CRC-32C: using 64-bit x86 acceleration.
mariadb 2024-06-05 15:31:28 0 [Note] WSREP: Found saved state: 2adf9749-1f4c-11ef-879b-025c3571947a:-1, safe_to_bootstrap: 0
mariadb 2024-06-05 15:31:28 0 [Note] WSREP: GCache DEBUG: opened preamble:
mariadb Version: 2
mariadb UUID: 2adf9749-1f4c-11ef-879b-025c3571947a
mariadb Seqno: 585739 - 586597
mariadb Offset: 1280
mariadb Synced: 1
mariadb 2024-06-05 15:31:28 0 [Note] WSREP: Recovering GCache ring buffer: version: 2, UUID: 2adf9749-1f4c-11ef-879b-025c3571947a, offset: 1280
mariadb 2024-06-05 15:31:28 0 [Note] WSREP: GCache::RingBuffer initial scan...  0.0% (        0/134217752 bytes) complete.
mariadb 2024-06-05 15:31:28 0 [Note] WSREP: GCache::RingBuffer initial scan...100.0% (134217752/134217752 bytes) complete.
mariadb 2024-06-05 15:31:28 0 [Note] WSREP: Recovering GCache ring buffer: found gapless sequence 585739-586597
mariadb 2024-06-05 15:31:28 0 [Note] WSREP: GCache::RingBuffer unused buffers scan...  0.0% (       0/27808816 bytes) complete.
mariadb 2024-06-05 15:31:28 0 [Note] WSREP: Recovering GCache ring buffer: found 7/866 locked buffers
mariadb 2024-06-05 15:31:28 0 [Note] WSREP: Recovering GCache ring buffer: free space: 106411376/134217728
mariadb 2024-06-05 15:31:28 0 [Note] WSREP: GCache::RingBuffer unused buffers scan...100.0% (27808816/27808816 bytes) complete.
mariadb 2024-06-05 15:31:28 0 [Note] WSREP: Passing config to GCS: base_dir = /var/lib/mysql/; base_host = 10.233.12.91; base_port = 4567; cert.log_conflicts = no; cert.optimistic_pa = yes; debug = no; evs.auto_evict = 0; evs.delay_margin = PT1S; evs.delayed_keep_period = PT30S; evs.inactive_check_period = PT0.5S; evs.inactive_timeout = PT15S; evs.join_retrans_period = PT1S; evs.max_install_timeouts = 3; evs.send_window = 4; evs.stats_report_period = PT1M; evs.suspect_timeout = PT5S; evs.user_send_window = 2; evs.view_forget_timeout = PT24H; gcache.dir = /var/lib/mysql/; gcache.keep_pages_size = 0; gcache.keep_plaintext_size = 128M; gcache.mem_size = 0; gcache.name = galera.cache; gcache.page_size = 128M; gcache.recover = yes; gcache.size = 128M; gcomm.thread_prio = ; gcs.fc_debug = 0; gcs.fc_factor = 1.0; gcs.fc_limit = 16; gcs.fc_master_slave = no; gcs.fc_single_primary = no; gcs.max_packet_size = 64500; gcs.max_throttle = 0.25; gcs.recv_q_hard_limit = 9223372036854775807; gcs.recv_q_soft_limit = 0.25; gcs.sync_donor = no; gmcast.listen_addr 
mariadb 2024-06-05 15:31:28 0 [Note] WSREP: Start replication
mariadb 2024-06-05 15:31:28 0 [Note] WSREP: Connecting with bootstrap option: 0
mariadb 2024-06-05 15:31:28 0 [Note] WSREP: Setting GCS initial position to 00000000-0000-0000-0000-000000000000:-1
mariadb 2024-06-05 15:31:28 0 [Note] WSREP: Using CRC-32C for message checksums.
mariadb 2024-06-05 15:31:28 0 [Note] WSREP: backend: asio
mariadb 2024-06-05 15:31:28 0 [Note] WSREP: gcomm thread scheduling priority set to other:0 
mariadb 2024-06-05 15:31:28 0 [Note] WSREP: access file(/var/lib/mysql//gvwstate.dat) failed(No such file or directory)
mariadb 2024-06-05 15:31:28 0 [Note] WSREP: restore pc from disk failed
mariadb 2024-06-05 15:31:28 0 [Note] WSREP: GMCast version 0
mariadb 2024-06-05 15:31:28 0 [Note] WSREP: (ad65216e-bde4, 'tcp://0.0.0.0:4567') listening at tcp://0.0.0.0:4567
mariadb 2024-06-05 15:31:28 0 [Note] WSREP: (ad65216e-bde4, 'tcp://0.0.0.0:4567') multicast: , ttl: 1
mariadb 2024-06-05 15:31:28 0 [Note] WSREP: EVS version 1
mariadb 2024-06-05 15:31:28 0 [Note] WSREP: gcomm: connecting to group 'mariadb-operator', peer 'mariadb-01-0.mariadb-01-internal.databases.svc.cluster.local:,mariadb-01-1.mariadb-01-internal.databases.svc.cluster.local:,mariadb-01-2.mariadb-01-internal.databases.svc.cluster.local:'
mariadb 2024-06-05 15:31:28 0 [Note] WSREP: (ad65216e-bde4, 'tcp://0.0.0.0:4567') Found matching local endpoint for a connection, blacklisting address tcp://10.233.12.91:4567
mariadb 2024-06-05 15:31:28 0 [Note] WSREP: (ad65216e-bde4, 'tcp://0.0.0.0:4567') connection established to 102ed9c1-ac26 tcp://10.233.11.28:4567
mariadb 2024-06-05 15:31:28 0 [Note] WSREP: (ad65216e-bde4, 'tcp://0.0.0.0:4567') turning message relay requesting on, nonlive peers: 
mariadb 2024-06-05 15:31:29 0 [Note] WSREP: EVS version upgrade 0 -> 1
mariadb 2024-06-05 15:31:29 0 [Note] WSREP: declaring 102ed9c1-ac26 at tcp://10.233.11.28:4567 stable
mariadb 2024-06-05 15:31:29 0 [Note] WSREP: PC protocol upgrade 0 -> 1
mariadb 2024-06-05 15:31:29 0 [Warning] WSREP: no nodes coming from prim view, prim not possible
mariadb 2024-06-05 15:31:29 0 [Note] WSREP: view(view_id(NON_PRIM,102ed9c1-ac26,51) memb {
mariadb     102ed9c1-ac26,0
mariadb     ad65216e-bde4,0
mariadb } joined {
mariadb } left {
mariadb } partitioned {
mariadb     0a951ac1-ac6d,0
mariadb     16ef9db4-87de,0
mariadb     40f1ae94-abfd,0
mariadb     506b0f46-97d0,0
mariadb     552ec306-bc95,0
mariadb     68ea8863-9973,0
mariadb     7151bc67-83c3,0
mariadb     83f8545c-97e8,0
mariadb     84ff0683-9bf2,0
mariadb     8be234fc-8aa7,0
mariadb     9f9374d6-9611,0
mariadb     a2f1afe1-8799,0
mariadb     b6ef78d5-96ff,0
mariadb     bbaf28a8-9432,0
mariadb     c74f85e3-8749,0
mariadb     dedd796f-8bfe,0
mariadb     e3f8903d-99f1,0
mariadb     f70504db-a32b,0
mariadb })
mariadb 2024-06-05 15:31:32 0 [Note] WSREP: (ad65216e-bde4, 'tcp://0.0.0.0:4567') turning message relay requesting off
mariadb 2024-06-05 15:31:59 0 [Note] WSREP: Deferred close timer started for socket with remote endpoint: tcp://10.233.11.28:4567
mariadb 2024-06-05 15:31:59 0 [ERROR] WSREP: failed to open gcomm backend connection: 110: failed to reach primary view: 110 (Connection timed out)
mariadb      at ./gcomm/src/pc.cpp:connect():160
mariadb 2024-06-05 15:31:59 0 [ERROR] WSREP: ./gcs/src/gcs_core.cpp:gcs_core_open():221: Failed to open backend connection: -110 (Connection timed out)
mariadb 2024-06-05 15:32:00 0 [Note] WSREP: Deferred close timer destruct
mariadb 2024-06-05 15:32:00 0 [ERROR] WSREP: ./gcs/src/gcs.cpp:gcs_open():1669: Failed to open channel 'mariadb-operator' at 'gcomm://mariadb-01-0.mariadb-01-internal.databases.svc.cluster.local,mariadb-01-1.mariadb-01-internal.databases.svc.cluster.local,mariadb-01-2.mariadb-01-internal.databases.svc.cluster.local': -110 (Connection timed out)
mariadb 2024-06-05 15:32:00 0 [ERROR] WSREP: gcs connect failed: Connection timed out
mariadb 2024-06-05 15:32:00 0 [ERROR] WSREP: wsrep::connect(gcomm://mariadb-01-0.mariadb-01-internal.databases.svc.cluster.local,mariadb-01-1.mariadb-01-internal.databases.svc.cluster.local,mariadb-01-2.mariadb-01-internal.databases.svc.cluster.local) failed: 7
mariadb 2024-06-05 15:32:00 0 [ERROR] Aborting
Stream closed EOF for databases/mariadb-01-2 (mariadb)

mariadb-operator

{"level":"debug","ts":1717602192.4230309,"logger":"galera.recovery.cluster","msg":"Error getting bootstrap source","controller":"mariadb","controllerGroup":"k8s.mariadb.com","controllerKind":"MariaDB","MariaDB":{"name":"mariadb-01","namespace":"databases"},"namespace":"databases","name":"mariadb-01","reconcileID":"1942b43a-0a4b-42c1-84f2-ff76598eb4c8","err":"recovery status not completed"}
{"level":"debug","ts":1717602192.4230525,"logger":"galera.recovery.cluster","msg":"Recovery by Pod","controller":"mariadb","controllerGroup":"k8s.mariadb.com","controllerKind":"MariaDB","MariaDB":{"name":"mariadb-01","namespace":"databases"},"namespace":"databases","name":"mariadb-01","reconcileID":"1942b43a-0a4b-42c1-84f2-ff76598eb4c8"}
{"level":"debug","ts":1717602192.423058,"logger":"galera.recovery.cluster","msg":"Skipping Pod recovery","controller":"mariadb","controllerGroup":"k8s.mariadb.com","controllerKind":"MariaDB","MariaDB":{"name":"mariadb-01","namespace":"databases"},"namespace":"databases","name":"mariadb-01","reconcileID":"1942b43a-0a4b-42c1-84f2-ff76598eb4c8","pod":"mariadb-01-0"}
{"level":"debug","ts":1717602192.423063,"logger":"galera.recovery.cluster","msg":"Skipping Pod recovery","controller":"mariadb","controllerGroup":"k8s.mariadb.com","controllerKind":"MariaDB","MariaDB":{"name":"mariadb-01","namespace":"databases"},"namespace":"databases","name":"mariadb-01","reconcileID":"1942b43a-0a4b-42c1-84f2-ff76598eb4c8","pod":"mariadb-01-1"}
{"level":"debug","ts":1717602192.423067,"logger":"galera.recovery.cluster","msg":"Skipping Pod recovery","controller":"mariadb","controllerGroup":"k8s.mariadb.com","controllerKind":"MariaDB","MariaDB":{"name":"mariadb-01","namespace":"databases"},"namespace":"databases","name":"mariadb-01","reconcileID":"1942b43a-0a4b-42c1-84f2-ff76598eb4c8","pod":"mariadb-01-2"}
{"level":"error","ts":1717602192.4520528,"msg":"Reconciler error","controller":"mariadb","controllerGroup":"k8s.mariadb.com","controllerKind":"MariaDB","MariaDB":{"name":"mariadb-01","namespace":"databases"},"namespace":"databases","name":"mariadb-01","reconcileID":"1942b43a-0a4b-42c1-84f2-ff76598eb4c8","error":"error reconciling Galera: 1 error occurred:\n\t* error recovering cluster: error getting bootstrap source: recovery status not completed\n\n","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.17.3/pkg/internal/controller/controller.go:329\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.17.3/pkg/internal/controller/controller.go:266\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.17.3/pkg/internal/controller/controller.go:227"}
{"level":"info","ts":1717602192.5445921,"logger":"galera.recovery","msg":"Recovering cluster","controller":"mariadb","controllerGroup":"k8s.mariadb.com","controllerKind":"MariaDB","MariaDB":{"name":"mariadb-01","namespace":"databases"},"namespace":"databases","name":"mariadb-01","reconcileID":"722ff11f-beec-4c0b-84a5-ef9dd946f6d6"}
{"level":"debug","ts":1717602192.544613,"logger":"galera.recovery.cluster","msg":"State by Pod","controller":"mariadb","controllerGroup":"k8s.mariadb.com","controllerKind":"MariaDB","MariaDB":{"name":"mariadb-01","namespace":"databases"},"namespace":"databases","name":"mariadb-01","reconcileID":"722ff11f-beec-4c0b-84a5-ef9dd946f6d6"}
{"level":"debug","ts":1717602192.5446186,"logger":"galera.recovery.cluster","msg":"Skipping Pod state","controller":"mariadb","controllerGroup":"k8s.mariadb.com","controllerKind":"MariaDB","MariaDB":{"name":"mariadb-01","namespace":"databases"},"namespace":"databases","name":"mariadb-01","reconcileID":"722ff11f-beec-4c0b-84a5-ef9dd946f6d6","pod":"mariadb-01-0"}
{"level":"debug","ts":1717602192.544626,"logger":"galera.recovery.cluster","msg":"Skipping Pod state","controller":"mariadb","controllerGroup":"k8s.mariadb.com","controllerKind":"MariaDB","MariaDB":{"name":"mariadb-01","namespace":"databases"},"namespace":"databases","name":"mariadb-01","reconcileID":"722ff11f-beec-4c0b-84a5-ef9dd946f6d6","pod":"mariadb-01-1"}
{"level":"debug","ts":1717602192.5446305,"logger":"galera.recovery.cluster","msg":"Skipping Pod state","controller":"mariadb","controllerGroup":"k8s.mariadb.com","controllerKind":"MariaDB","MariaDB":{"name":"mariadb-01","namespace":"databases"},"namespace":"databases","name":"mariadb-01","reconcileID":"722ff11f-beec-4c0b-84a5-ef9dd946f6d6","pod":"mariadb-01-2"}
{"level":"debug","ts":1717602192.5604854,"logger":"galera.recovery.cluster","msg":"Error getting bootstrap source","controller":"mariadb","controllerGroup":"k8s.mariadb.com","controllerKind":"MariaDB","MariaDB":{"name":"mariadb-01","namespace":"databases"},"namespace":"databases","name":"mariadb-01","reconcileID":"722ff11f-beec-4c0b-84a5-ef9dd946f6d6","err":"recovery status not completed"}
{"level":"debug","ts":1717602192.5605175,"logger":"galera.recovery.cluster","msg":"Recovery by Pod","controller":"mariadb","controllerGroup":"k8s.mariadb.com","controllerKind":"MariaDB","MariaDB":{"name":"mariadb-01","namespace":"databases"},"namespace":"databases","name":"mariadb-01","reconcileID":"722ff11f-beec-4c0b-84a5-ef9dd946f6d6"}
{"level":"debug","ts":1717602192.5605226,"logger":"galera.recovery.cluster","msg":"Skipping Pod recovery","controller":"mariadb","controllerGroup":"k8s.mariadb.com","controllerKind":"MariaDB","MariaDB":{"name":"mariadb-01","namespace":"databases"},"namespace":"databases","name":"mariadb-01","reconcileID":"722ff11f-beec-4c0b-84a5-ef9dd946f6d6","pod":"mariadb-01-0"}
{"level":"debug","ts":1717602192.5605278,"logger":"galera.recovery.cluster","msg":"Skipping Pod recovery","controller":"mariadb","controllerGroup":"k8s.mariadb.com","controllerKind":"MariaDB","MariaDB":{"name":"mariadb-01","namespace":"databases"},"namespace":"databases","name":"mariadb-01","reconcileID":"722ff11f-beec-4c0b-84a5-ef9dd946f6d6","pod":"mariadb-01-1"}
{"level":"debug","ts":1717602192.5605316,"logger":"galera.recovery.cluster","msg":"Skipping Pod recovery","controller":"mariadb","controllerGroup":"k8s.mariadb.com","controllerKind":"MariaDB","MariaDB":{"name":"mariadb-01","namespace":"databases"},"namespace":"databases","name":"mariadb-01","reconcileID":"722ff11f-beec-4c0b-84a5-ef9dd946f6d6","pod":"mariadb-01-2"}
{"level":"error","ts":1717602192.5984552,"msg":"Reconciler error","controller":"mariadb","controllerGroup":"k8s.mariadb.com","controllerKind":"MariaDB","MariaDB":{"name":"mariadb-01","namespace":"databases"},"namespace":"databases","name":"mariadb-01","reconcileID":"722ff11f-beec-4c0b-84a5-ef9dd946f6d6","error":"error reconciling Galera: 1 error occurred:\n\t* error recovering cluster: error getting bootstrap source: recovery status not completed\n\n","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.17.3/pkg/internal/controller/controller.go:329\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.17.3/pkg/internal/controller/controller.go:266\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.17.3/pkg/internal/controller/controller.go:227"}
{"level":"info","ts":1717602273.2236054,"logger":"galera.recovery","msg":"Recovering cluster","controller":"mariadb","controllerGroup":"k8s.mariadb.com","controllerKind":"MariaDB","MariaDB":{"name":"mariadb-01","namespace":"databases"},"namespace":"databases","name":"mariadb-01","reconcileID":"96e95b64-74f1-485a-9e9a-f02f1e2ff225"}
{"level":"debug","ts":1717602273.223632,"logger":"galera.recovery.cluster","msg":"State by Pod","controller":"mariadb","controllerGroup":"k8s.mariadb.com","controllerKind":"MariaDB","MariaDB":{"name":"mariadb-01","namespace":"databases"},"namespace":"databases","name":"mariadb-01","reconcileID":"96e95b64-74f1-485a-9e9a-f02f1e2ff225"}
{"level":"debug","ts":1717602273.2236447,"logger":"galera.recovery.cluster","msg":"Skipping Pod state","controller":"mariadb","controllerGroup":"k8s.mariadb.com","controllerKind":"MariaDB","MariaDB":{"name":"mariadb-01","namespace":"databases"},"namespace":"databases","name":"mariadb-01","reconcileID":"96e95b64-74f1-485a-9e9a-f02f1e2ff225","pod":"mariadb-01-0"}
{"level":"debug","ts":1717602273.22365,"logger":"galera.recovery.cluster","msg":"Skipping Pod state","controller":"mariadb","controllerGroup":"k8s.mariadb.com","controllerKind":"MariaDB","MariaDB":{"name":"mariadb-01","namespace":"databases"},"namespace":"databases","name":"mariadb-01","reconcileID":"96e95b64-74f1-485a-9e9a-f02f1e2ff225","pod":"mariadb-01-1"}
{"level":"debug","ts":1717602273.2236526,"logger":"galera.recovery.cluster","msg":"Skipping Pod state","controller":"mariadb","controllerGroup":"k8s.mariadb.com","controllerKind":"MariaDB","MariaDB":{"name":"mariadb-01","namespace":"databases"},"namespace":"databases","name":"mariadb-01","reconcileID":"96e95b64-74f1-485a-9e9a-f02f1e2ff225","pod":"mariadb-01-2"}
{"level":"debug","ts":1717602273.2398326,"logger":"galera.recovery.cluster","msg":"Error getting bootstrap source","controller":"mariadb","controllerGroup":"k8s.mariadb.com","controllerKind":"MariaDB","MariaDB":{"name":"mariadb-01","namespace":"databases"},"namespace":"databases","name":"mariadb-01","reconcileID":"96e95b64-74f1-485a-9e9a-f02f1e2ff225","err":"recovery status not completed"}
{"level":"debug","ts":1717602273.239857,"logger":"galera.recovery.cluster","msg":"Recovery by Pod","controller":"mariadb","controllerGroup":"k8s.mariadb.com","controllerKind":"MariaDB","MariaDB":{"name":"mariadb-01","namespace":"databases"},"namespace":"databases","name":"mariadb-01","reconcileID":"96e95b64-74f1-485a-9e9a-f02f1e2ff225"}
{"level":"debug","ts":1717602273.2398624,"logger":"galera.recovery.cluster","msg":"Skipping Pod recovery","controller":"mariadb","controllerGroup":"k8s.mariadb.com","controllerKind":"MariaDB","MariaDB":{"name":"mariadb-01","namespace":"databases"},"namespace":"databases","name":"mariadb-01","reconcileID":"96e95b64-74f1-485a-9e9a-f02f1e2ff225","pod":"mariadb-01-0"}
{"level":"debug","ts":1717602273.2398767,"logger":"galera.recovery.cluster","msg":"Skipping Pod recovery","controller":"mariadb","controllerGroup":"k8s.mariadb.com","controllerKind":"MariaDB","MariaDB":{"name":"mariadb-01","namespace":"databases"},"namespace":"databases","name":"mariadb-01","reconcileID":"96e95b64-74f1-485a-9e9a-f02f1e2ff225","pod":"mariadb-01-1"}
{"level":"debug","ts":1717602273.2398796,"logger":"galera.recovery.cluster","msg":"Skipping Pod recovery","controller":"mariadb","controllerGroup":"k8s.mariadb.com","controllerKind":"MariaDB","MariaDB":{"name":"mariadb-01","namespace":"databases"},"namespace":"databases","name":"mariadb-01","reconcileID":"96e95b64-74f1-485a-9e9a-f02f1e2ff225","pod":"mariadb-01-2"}
{"level":"error","ts":1717602273.274543,"msg":"Reconciler error","controller":"mariadb","controllerGroup":"k8s.mariadb.com","controllerKind":"MariaDB","MariaDB":{"name":"mariadb-01","namespace":"databases"},"namespace":"databases","name":"mariadb-01","reconcileID":"96e95b64-74f1-485a-9e9a-f02f1e2ff225","error":"error reconciling Galera: 1 error occurred:\n\t* error recovering cluster: error getting bootstrap source: recovery status not completed\n\n","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.17.3/pkg/internal/controller/controller.go:329\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.17.3/pkg/internal/controller/controller.go:266\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.17.3/pkg/internal/controller/controller.go:227"}

mariadb-webhook

{"level":"info","ts":1717599203.1417055,"logger":"controller-runtime.builder","msg":"skip registering a mutating webhook, object does not implement admission.Defaulter or WithDefaulter wasn't called","GVK":"k8s.mariadb.com/v1alpha1, Kind=Restore"}
{"level":"info","ts":1717599203.1417353,"logger":"controller-runtime.builder","msg":"Registering a validating webhook","GVK":"k8s.mariadb.com/v1alpha1, Kind=Restore","path":"/validate-k8s-mariadb-com-v1alpha1-restore"}
{"level":"info","ts":1717599203.1417913,"logger":"controller-runtime.webhook","msg":"Registering webhook","path":"/validate-k8s-mariadb-com-v1alpha1-restore"}
{"level":"info","ts":1717599203.1418319,"logger":"controller-runtime.builder","msg":"skip registering a mutating webhook, object does not implement admission.Defaulter or WithDefaulter wasn't called","GVK":"k8s.mariadb.com/v1alpha1, Kind=User"}
{"level":"info","ts":1717599203.1418583,"logger":"controller-runtime.builder","msg":"Registering a validating webhook","GVK":"k8s.mariadb.com/v1alpha1, Kind=User","path":"/validate-k8s-mariadb-com-v1alpha1-user"}
{"level":"info","ts":1717599203.1419075,"logger":"controller-runtime.webhook","msg":"Registering webhook","path":"/validate-k8s-mariadb-com-v1alpha1-user"}
{"level":"info","ts":1717599203.1419513,"logger":"controller-runtime.builder","msg":"skip registering a mutating webhook, object does not implement admission.Defaulter or WithDefaulter wasn't called","GVK":"k8s.mariadb.com/v1alpha1, Kind=Grant"}
{"level":"info","ts":1717599203.1419804,"logger":"controller-runtime.builder","msg":"Registering a validating webhook","GVK":"k8s.mariadb.com/v1alpha1, Kind=Grant","path":"/validate-k8s-mariadb-com-v1alpha1-grant"}
{"level":"info","ts":1717599203.1420224,"logger":"controller-runtime.webhook","msg":"Registering webhook","path":"/validate-k8s-mariadb-com-v1alpha1-grant"}
{"level":"info","ts":1717599203.1420496,"logger":"controller-runtime.builder","msg":"skip registering a mutating webhook, object does not implement admission.Defaulter or WithDefaulter wasn't called","GVK":"k8s.mariadb.com/v1alpha1, Kind=Database"}
{"level":"info","ts":1717599203.142094,"logger":"controller-runtime.builder","msg":"Registering a validating webhook","GVK":"k8s.mariadb.com/v1alpha1, Kind=Database","path":"/validate-k8s-mariadb-com-v1alpha1-database"}
{"level":"info","ts":1717599203.1421459,"logger":"controller-runtime.webhook","msg":"Registering webhook","path":"/validate-k8s-mariadb-com-v1alpha1-database"}
{"level":"info","ts":1717599203.142191,"logger":"controller-runtime.builder","msg":"skip registering a mutating webhook, object does not implement admission.Defaulter or WithDefaulter wasn't called","GVK":"k8s.mariadb.com/v1alpha1, Kind=Connection"}
{"level":"info","ts":1717599203.1422205,"logger":"controller-runtime.builder","msg":"Registering a validating webhook","GVK":"k8s.mariadb.com/v1alpha1, Kind=Connection","path":"/validate-k8s-mariadb-com-v1alpha1-connection"}
{"level":"info","ts":1717599203.1422784,"logger":"controller-runtime.webhook","msg":"Registering webhook","path":"/validate-k8s-mariadb-com-v1alpha1-connection"}
{"level":"info","ts":1717599203.1423242,"logger":"controller-runtime.builder","msg":"skip registering a mutating webhook, object does not implement admission.Defaulter or WithDefaulter wasn't called","GVK":"k8s.mariadb.com/v1alpha1, Kind=SqlJob"}
{"level":"info","ts":1717599203.1423583,"logger":"controller-runtime.builder","msg":"Registering a validating webhook","GVK":"k8s.mariadb.com/v1alpha1, Kind=SqlJob","path":"/validate-k8s-mariadb-com-v1alpha1-sqljob"}
{"level":"info","ts":1717599203.1424,"logger":"controller-runtime.webhook","msg":"Registering webhook","path":"/validate-k8s-mariadb-com-v1alpha1-sqljob"}
{"level":"info","ts":1717599203.1424258,"logger":"setup","msg":"Starting manager"}
{"level":"info","ts":1717599203.1426141,"logger":"controller-runtime.metrics","msg":"Starting metrics server"}
{"level":"info","ts":1717599203.142692,"logger":"controller-runtime.metrics","msg":"Serving metrics server","bindAddress":":8080","secure":false}
{"level":"info","ts":1717599203.1430452,"msg":"starting server","kind":"health probe","addr":":8081"}
{"level":"info","ts":1717599203.1435938,"logger":"controller-runtime.webhook","msg":"Starting webhook server"}
{"level":"info","ts":1717599203.1438594,"logger":"controller-runtime.certwatcher","msg":"Updated current TLS certificate"}
{"level":"info","ts":1717599203.1439207,"logger":"controller-runtime.webhook","msg":"Serving webhook server","host":"","port":9443}
{"level":"info","ts":1717599203.1440182,"logger":"controller-runtime.certwatcher","msg":"Starting certificate watcher"}
{"level":"debug","ts":1717599237.656405,"logger":"mariadb","msg":"Validate update","name":"mariadb-01"}
{"level":"debug","ts":1717599237.8217585,"logger":"mariadb","msg":"Validate update","name":"mariadb-01"}
{"level":"debug","ts":1717599263.017706,"logger":"mariadb","msg":"Validate update","name":"mariadb-01"}
{"level":"debug","ts":1717599263.1575184,"logger":"mariadb","msg":"Validate update","name":"mariadb-01"}

Environment details:

  • Kubernetes version: 1.28.5
  • Kubernetes distribution: AKS
  • mariadb-operator version: 0.0.28
  • Install method: helm (0.28.1)

Additional context

I've managed to restore from a backup done with operator 0.28 (used for both backup and restore).
But just restarting the deployment (helm delete + helm install) crashes MariaDB. The errors look like the ones shown by the thread opener.