initContainer defaults preventing existing YAML from working
aquarapid opened this issue · 0 comments
aquarapid commented
I have a (test) operator cluster YAML that has a keyspace section looking like this:
```yaml
keyspaces:
- name: main
  partitionings:
  - equal:
      parts: 2
      shardTemplate:
        databaseInitScriptSecret:
          name: example-cluster-config
          key: init_db.sql
        replication:
          enforceSemiSync: true
        tabletPools:
        - cell: uscentral1a
          type: replica
          replicas: 1
          vttablet:
            extraFlags:
              db_charset: utf8mb4
            resources:
              requests:
                cpu: 10m
                memory: 256Mi
          mysqld:
            resources:
              requests:
                cpu: 10m
                memory: 256Mi
          dataVolumeClaimTemplate:
            accessModes: ["ReadWriteOnce"]
            resources:
              requests:
                storage: 5Gi
            storageClassName: uscentral1a-ssd
        - cell: uscentral1c
          type: replica
          replicas: 1
          vttablet:
            extraFlags:
              db_charset: utf8mb4
            resources:
              requests:
                cpu: 10m
                memory: 256Mi
          mysqld:
            resources:
              requests:
                cpu: 10m
                memory: 256Mi
          dataVolumeClaimTemplate:
            accessModes: ["ReadWriteOnce"]
            resources:
              requests:
                storage: 5Gi
            storageClassName: uscentral1c-ssd
        - cell: uscentral1f
          type: replica
          replicas: 1
          vttablet:
            extraFlags:
              db_charset: utf8mb4
            resources:
              requests:
                cpu: 10m
                memory: 256Mi
          mysqld:
            resources:
              requests:
                cpu: 10m
                memory: 256Mi
          dataVolumeClaimTemplate:
            accessModes: ["ReadWriteOnce"]
            resources:
              requests:
                storage: 5Gi
            storageClassName: uscentral1f-ssd
```
This used to work just fine. But since PR #216, I get multiple errors (one for each vttablet) when applying it. These can be seen with `kubectl get events` or in the operator logs:

```
2m38s Warning CreateFailed vitessshard/example-main-x-80-84bc4397 failed to create Pod example-vttablet-uscentral1f-2191363457-7dead7ff: Pod "example-vttablet-uscentral1f-2191363457-7dead7ff" is invalid: [spec.initContainers[0].resources.requests: Invalid value: "256Mi": must be less than or equal to memory limit, spec.initContainers[1].resources.requests: Invalid value: "256Mi": must be less than or equal to memory limit]
```
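For context, the rejection itself is standard Kubernetes Pod validation rather than anything operator-specific: for every container, including init containers, a resource request must not exceed the corresponding limit. A minimal standalone Pod that triggers the same error (all names here are illustrative, not taken from the operator):

```yaml
# Hypothetical Pod: the API server rejects it because the init
# container's memory request (256Mi) exceeds its memory limit (128Mi).
apiVersion: v1
kind: Pod
metadata:
  name: request-exceeds-limit-demo
spec:
  initContainers:
  - name: init
    image: busybox
    command: ["true"]
    resources:
      requests:
        memory: 256Mi
      limits:
        memory: 128Mi
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
```

This suggests the operator is now defaulting init-container limits below the request it copies from the vttablet spec, which is why any request above the default limit fails validation.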
I can resolve this in one of two ways:
- Reverting #216
- Reducing the memory requests for the vttablets to 128Mi. However, this isn't a real solution, since it would pack too many vttablet pods onto a node where I want to run fewer, larger vttablets.
The environment is GKE (1.19.16-gke.1500).