airflow-helm/charts

PostgreSQL is not coming up; stuck in Pending state.



Chart Version

8.9.0

Kubernetes Version

Client Version: v1.29.3
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.27.10

Helm Version

version.BuildInfo{Version:"v3.15.1", GitCommit:"e211f2aa62992bd72586b395de50979e31231829", GitTreeState:"clean", GoVersion:"go1.22.3"}

Description

The PostgreSQL pod is not coming up; it stays in the Pending state.

Relevant Logs

Postgres itself is not coming up:

airflow-postgresql-0                    0/1   Pending
airflow-scheduler-5df8fb6987-vmldv      0/2   Init:CrashLoopBackOff
airflow-sync-users-8867fd479-q2db8      0/1   Init:CrashLoopBackOff
airflow-triggerer-5786fc4f66-5xfx6      0/1   Init:CrashLoopBackOff
airflow-web-68dc78c8bf-zmwn7            0/1   Init:CrashLoopBackOff


The log of airflow-postgresql-0 is blank.


Running describe on airflow-postgresql-0 shows the following events:
Events:
  Type     Reason            Age                 From               Message
  ----     ------            ----                ----               -------
  Warning  FailedScheduling  23m                 default-scheduler  running PreBind plugin "VolumeBinding": binding volumes: timed out waiting for the condition
  Warning  FailedScheduling  3m5s (x2 over 13m)  default-scheduler  running PreBind plugin "VolumeBinding": binding volumes: timed out waiting for the condition

Custom Helm Values

## DATABASE | Embedded Postgres
###################################
postgresql:
  ## if the `stable/postgresql` chart is used
  ## - [WARNING] the embedded Postgres is NOT SUITABLE for production deployments of Airflow
  ## - [WARNING] consider using an external database with `externalDatabase.*`
  ## - set to `false` if using `externalDatabase.*`
  ##
  enabled: true

  ## configs for the postgres container image
  ##
  image:
    registry: ghcr.io
    repository: airflow-helm/postgresql-bitnami
    tag: 11.22-patch.0
    pullPolicy: IfNotPresent

  ## the postgres database to use
  ##
  postgresqlDatabase: airflow

  ## the postgres user to create
  ##
  postgresqlUsername: postgres

  ## the postgres user's password
  ##
  postgresqlPassword: airflow

  ## the name of a pre-created secret containing the postgres password
  ##
  existingSecret: ""

  ## the key within `postgresql.existingSecret` containing the password string
  ##
  existingSecretKey: "postgresql-password"

  ## configs for the PVC of postgresql
  ##
  persistence:
    ## if postgres will use Persistent Volume Claims to store data
    ## - [WARNING] if false, data will be LOST as postgres Pods restart
    ##
    enabled: true

    ## the name of the StorageClass used by the PVC
    ##
    storageClass: oci-bv-oi

    ## the access modes of the PVC
    ##
    accessModes:
      - ReadWriteOnce

    ## the size of PVC to request
    ##
    size: 8Gi
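
Since the PVC requests `storageClass: oci-bv-oi`, one quick sanity check (assuming you have kubectl access to the cluster) is that a StorageClass with exactly that name exists and can dynamically provision volumes:

# list the StorageClasses the cluster knows about and confirm "oci-bv-oi" is among them
kubectl get storageclass

# inspect the provisioner and volumeBindingMode of the class referenced by the chart
kubectl describe storageclass oci-bv-oi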

@infa-kvaibhav this will almost certainly be an issue with your cluster not provisioning or binding the PersistentVolume correctly to the Pod.
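
One way to see where the binding is stuck is to inspect the PVC that the postgres StatefulSet creates. As a sketch (assuming the release is installed in a namespace called `airflow` and the PVC follows the usual `data-airflow-postgresql-0` naming, both of which you should verify first):

# list PVCs in the release namespace (adjust -n to match your install)
kubectl get pvc -n airflow

# describe the postgres PVC; its Events usually say why provisioning or binding failed
kubectl describe pvc data-airflow-postgresql-0 -n airflow

# check whether any PersistentVolume was provisioned at all
kubectl get pv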

Search around for `running PreBind plugin "VolumeBinding": binding volumes: timed out waiting for the condition` based on whatever CSI driver you are using.

Also, check the logs of the Pods for your CSI driver.
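
For example, on OKE the CSI components typically run in `kube-system`; the names below are assumptions, so list the namespace first and substitute whatever your cluster actually runs:

# find the CSI / block-volume provisioner pods (names vary by driver)
kubectl get pods -n kube-system

# assumption: on OKE these are often csi-oci-controller-* and csi-oci-node-*;
# replace the name with whatever the previous command shows for your CSI driver
kubectl logs -n kube-system deploy/csi-oci-controller --all-containers --tail=100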


Because this is unrelated to the chart, I am going to close it, but feel free to provide an update if you figure out what was wrong with your cluster.