check-db init container fails on init_fs_encoding using KubernetesExecutor
ArcanElement opened this issue · 1 comment
Checks
- I have checked for existing issues.
- This report is about the User-Community Airflow Helm Chart.
Chart Version
8.8.0
Kubernetes Version
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short. Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"27", GitVersion:"v1.27.2", GitCommit:"7f6f68fdabc4df88cfea2dcf9a19b2b830f1e647", GitTreeState:"clean", BuildDate:"2023-05-17T14:20:07Z", GoVersion:"go1.20.4", Compiler:"gc", Platform:"windows/amd64"}
Kustomize Version: v5.0.1
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.9", GitCommit:"2e808b7cb054ee242b68e62455323aa783991f03", GitTreeState:"clean", BuildDate:"2020-01-18T23:24:23Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
WARNING: version difference between client (1.27) and server (1.15) exceeds the supported minor version skew of +/-1
Helm Version
version.BuildInfo{Version:"v3.9.4", GitCommit:"dbc6d8e20fe1d58d50e6ed30f09a04a77e4c68db", GitTreeState:"clean", GoVersion:"go1.17.13"}
Description
I originally ran into this issue while trying to upgrade a working 8.7.1 instance to 8.8.0. However, a fresh install of either chart version (pinning 8.7.1 with helm's `--version` flag) now fails with this error.
The install goes through, but none of the pods run, because the check-db init container fails with the error shown in Relevant Logs.
We are using a custom repo for the image, but the image is a direct pull-and-push of the Docker Hub image. I've replaced some identifying data (the image repo and ingress domain) in the values with `<>` placeholders. The logs are the full log output of the check-db container in the db-migrations pod.
Relevant Logs
bash: warning: setlocale: LC_ALL: cannot change locale (C.UTF-8)
/bin/bash: warning: setlocale: LC_ALL: cannot change locale (C.UTF-8)
Python path configuration:
PYTHONHOME = (not set)
PYTHONPATH = (not set)
program name = '/usr/local/bin/python'
isolated = 0
environment = 1
user site = 1
import site = 1
sys._base_executable = '/usr/local/bin/python'
sys.base_prefix = '/usr/local'
sys.base_exec_prefix = '/usr/local'
sys.platlibdir = 'lib'
sys.executable = '/usr/local/bin/python'
sys.prefix = '/usr/local'
sys.exec_prefix = '/usr/local'
sys.path = [
'/usr/local/lib/python39.zip',
'/usr/local/lib/python3.9',
'/usr/local/lib/python3.9/lib-dynload',
]
Fatal Python error: init_fs_encoding: failed to get the Python codec of the filesystem encoding
Python runtime state: core initialized
ModuleNotFoundError: No module named 'encodings'
Current thread 0x00007fa8772f5b80 (most recent call first):
<no Python frame>
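For context (a generic reproduction, not a diagnosis of this cluster): CPython prints this exact fatal `init_fs_encoding` error whenever it cannot locate its own standard library at startup, for example when `PYTHONHOME` points at a directory without the stdlib or a volume mount shadows `/usr/local/lib/python3.9`. A minimal sketch that reproduces the same failure class locally:

```python
import os
import subprocess
import sys
import tempfile

# Point PYTHONHOME at an empty directory so the interpreter cannot find
# its standard library at startup. This triggers the same fatal
# init_fs_encoding / "No module named 'encodings'" error seen in the logs.
with tempfile.TemporaryDirectory() as empty_home:
    env = dict(os.environ, PYTHONHOME=empty_home)
    proc = subprocess.run(
        [sys.executable, "-c", "pass"],
        env=env,
        capture_output=True,
        text=True,
    )

print(proc.returncode != 0)        # True: the interpreter failed to start
print("encodings" in proc.stderr)  # True: same ModuleNotFoundError as above
```

This suggests the container's Python could not see its stdlib at that moment (e.g. an environment or mount problem), rather than a bug in Airflow itself.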
Custom Helm Values
########################################
## CONFIG | Airflow Configs
########################################
airflow:
  ## configs for the airflow container image
  ##
  image:
    # We are using a custom repo to mirror the image, but it is a direct pull and push of the airflow image
    repository: <custom repo>/airflow
    tag: 2.6.3-python3.9
    pullPolicy: IfNotPresent

  ## the airflow executor type to use
  ## - allowed values: "CeleryExecutor", "KubernetesExecutor", "CeleryKubernetesExecutor"
  ## - customize the "KubernetesExecutor" pod-template with `airflow.kubernetesPodTemplate.*`
  ##
  executor: KubernetesExecutor
###################################
## COMPONENT | Airflow Scheduler
###################################
scheduler:
  logCleanup:
    enabled: false

###################################
## COMPONENT | Airflow Workers
###################################
workers:
  ## if the airflow workers StatefulSet should be deployed
  ##
  enabled: false

###################################
## COMPONENT | Flower
###################################
flower:
  ## if the airflow flower UI should be deployed
  ##
  enabled: false
###################################
## CONFIG | Airflow Logs
###################################
logs:
  ## configs for the logs PVC
  ##
  persistence:
    ## if a persistent volume is mounted at `logs.path`
    ##
    enabled: true

    ## the name of the StorageClass used by the PVC
    ## - if set to "", then `PersistentVolumeClaim/spec.storageClassName` is omitted
    ## - if set to "-", then `PersistentVolumeClaim/spec.storageClassName` is set to ""
    ##
    # empty string means cluster-edfas
    storageClass: ""

    ## the access mode of the PVC
    ## - [WARNING] must be "ReadWriteMany" or airflow pods will fail to start
    ##
    accessMode: ReadWriteMany

    ## the size of PVC to request
    ##
    size: 1Gi
###################################
## CONFIG | Kubernetes Ingress
###################################
ingress:
  ## if we should deploy Ingress resources
  ##
  enabled: true

  ## the `apiVersion` to use for Ingress resources
  ## - for Kubernetes 1.19 and later: "networking.k8s.io/v1"
  ## - for Kubernetes 1.18 and before: "networking.k8s.io/v1beta1"
  ##
  apiVersion: networking.k8s.io/v1beta1

  ## configs for the Ingress of the web Service
  ##
  web:
    ## the path for the web Ingress
    ## - [WARNING] do NOT include the trailing slash (for root, set an empty string)
    ##
    ## ____ EXAMPLE _______________
    ##   # webserver URL: http://example.com/airflow
    ##   path: "/airflow"
    ##
    path: ""

    ## the hostname for the web Ingress
    ##
    host: test-airflow-web.<domain>
###################################
## CONFIG | Kubernetes ServiceAccount
###################################
serviceAccount:
  ## if a Kubernetes ServiceAccount is created
  ## - if `false`, you must create the service account outside this chart with name: `serviceAccount.name`
  ##
  create: true

  ## the name of the ServiceAccount
  ## - by default the name is generated using the `airflow.serviceAccountName` template in `_helpers/common.tpl`
  ##
  name: "test-airflow"
###################################
## DATABASE | PgBouncer
###################################
pgbouncer:
  ## if the pgbouncer Deployment is created
  ##
  enabled: false

  ## configs for the pgbouncer container image
  ##
  # image:
  #   repository: ctl-devops.aero.org/edfas/airflow/pgbouncer
  #   tag: 1.17.0-patch.0
  #   pullPolicy: IfNotPresent
  #   uid: 1001
  #   gid: 1001
###################################
## DATABASE | Embedded Postgres
###################################
postgresql:
  ## if the `stable/postgresql` chart is used
  ## - [WARNING] the embedded Postgres is NOT SUITABLE for production deployments of Airflow
  ## - [WARNING] consider using an external database with `externalDatabase.*`
  ## - set to `false` if using `externalDatabase.*`
  ##
  enabled: false
###################################
## DATABASE | External Database
###################################
externalDatabase:
  ## the type of external database
  ## - allowed values: "mysql", "postgres"
  ##
  type: postgres

  ## the host of the external database
  ##
  host: postgresql

  ## the port of the external database
  ##
  port: 5432

  ## the database/scheme to use within the external database
  ##
  database: test_airflow

  ## the username for the external database
  ##
  user: postgres

  ## the name of a pre-created secret containing the external database user
  ## - if set, this overrides `externalDatabase.user`
  ##
  userSecret: ""

  ## the key within `externalDatabase.userSecret` containing the user string
  ##
  userSecretKey: "postgresql-user"

  ## the password for the external database
  ## - [WARNING] to avoid storing the password in plain-text within your values,
  ##   create a Kubernetes secret and use `externalDatabase.passwordSecret`
  ##
  password: "admin"

  ## the name of a pre-created secret containing the external database password
  ## - if set, this overrides `externalDatabase.password`
  ##
  passwordSecret: ""

  ## the key within `externalDatabase.passwordSecret` containing the password string
  ##
  passwordSecretKey: "postgresql-password"

  ## extra connection-string properties for the external database
  ##
  ## ____ EXAMPLE _______________
  ##   # require SSL (only for Postgres)
  ##   properties: "?sslmode=require"
  ##
  properties: ""
###################################
## DATABASE | Embedded Redis
###################################
redis:
  ## if the `stable/redis` chart is used
  ## - set to `false` if `airflow.executor` is `KubernetesExecutor`
  ## - set to `false` if using `externalRedis.*`
  ##
  enabled: false
This issue is no longer occurring. We have not changed our Docker image or the values file in any way, but the release now works as expected. I suspect something was changed or fixed in our Kubernetes infrastructure that resolved this. If I find out what changed, I will comment here.