Helm operator does not create pods for a new CR instance after switching helm-operator from v1.33.0 to v1.34.0
lihongbj opened this issue · 4 comments
Bug Report
What did you do?
Deploying a helm chart CR failed with helm-operator v1.34.0: the CR's pods were not created at all.
I defined a CRD `Kong` for the Kong helm chart and use the helm operator to deploy it; a `watches.yaml` is also defined to watch `Kong` resources.
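For context, the watches.yaml follows the standard helm-operator layout. The group, version, and kind below are taken from the operator logs further down; the chart path is a hypothetical placeholder:

```yaml
# watches.yaml (sketch): group/version/kind match the operator log output;
# the chart path is a hypothetical placeholder for the bundled Kong chart.
- group: management.example.com
  version: v1alpha1
  kind: Kong
  chart: helm-charts/kong
  reconcilePeriod: 1m
```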
During deployment, the helm operator pod comes up first, and then a new CR instance `kong/gateway` is created, so the CR instance's pods should be created accordingly.
With helm operator v1.33.0 those CR pods are created as expected, while with the new v1.34.0 they are NOT created at all. Moreover, nothing new is logged in the helm operator pod log after the CR instance is created.
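For illustration, a minimal CR instance of the shape described above might look like this; the apiVersion and kind come from the operator logs, while the namespace and spec values are placeholders:

```yaml
# Hypothetical "gateway" CR instance; spec contents are illustrative only.
apiVersion: management.example.com/v1alpha1
kind: Kong
metadata:
  name: gateway
  namespace: kong   # placeholder: whichever namespace the operator watches
spec: {}            # chart value overrides would go here
```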
What did you expect to see?
The CR's pods are created and running, as with helm-operator v1.33.0, and the helm operator pod log shows output like the following:
kong CR:

```
# oc get kong
NAME      STATUS
gateway   Deployed
```

pod list:

```
# oc get pod
NAME                                        READY   STATUS      RESTARTS   AGE
gateway-kong-69dbd4-ffbs8                   2/2     Running     0          60m
gateway-kong-post-install-resources-n6w9j   0/1     Completed   0          60m
kong-operator-7c5788dfbc-2hql7              1/1     Running     0          60m
```
helm-operator pod log:

```
{"level":"info","ts":"2024-03-05T05:15:02Z","logger":"cmd","msg":"Version","Go Version":"go1.21.5","GOOS":"linux","GOARCH":"amd64","helm-operator":"v1.33.0-dirty","commit":"542966812906456a8d67cf7284fc6410b104e118"}
{"level":"info","ts":"2024-03-05T05:15:02Z","logger":"cmd","msg":"Environment variable OPERATOR_NAME has been deprecated, use --leader-election-id instead."}
{"level":"info","ts":"2024-03-05T05:15:02Z","logger":"cmd","msg":"Watching single namespace.","Namespace":"katamari"}
2024/03/05 05:15:02 Warning: Dependencies are handled in Chart.yaml since apiVersion "v2". We recommend migrating dependencies to Chart.yaml.
{"level":"info","ts":"2024-03-05T05:15:02Z","logger":"controller-runtime.metrics","msg":"Metrics server is starting to listen","addr":":8080"}
{"level":"info","ts":"2024-03-05T05:15:02Z","logger":"helm.controller","msg":"Watching resource","apiVersion":"management.example.com/v1alpha1","kind":"Kong","reconcilePeriod":"1m0s"}
{"level":"info","ts":"2024-03-05T05:15:02Z","msg":"Starting server","kind":"health probe","addr":"[::]:8081"}
{"level":"info","ts":"2024-03-05T05:15:02Z","msg":"starting server","path":"/metrics","kind":"metrics","addr":"[::]:8080"}
{"level":"info","ts":"2024-03-05T05:15:02Z","msg":"Starting EventSource","controller":"kong-controller","source":"kind source: *unstructured.Unstructured"}
{"level":"info","ts":"2024-03-05T05:15:02Z","msg":"Starting Controller","controller":"kong-controller"}
{"level":"info","ts":"2024-03-05T05:15:02Z","msg":"Starting workers","controller":"kong-controller","worker count":16}
2024/03/05 05:15:05 Warning: Dependencies are handled in Chart.yaml since apiVersion "v2". We recommend migrating dependencies to Chart.yaml.
2024/03/05 05:15:05 warning: cannot overwrite table with non table for kong.proxy.stream (map[])
2024/03/05 05:15:05 warning: cannot overwrite table with non table for kong.proxy.stream (map[])
{"level":"info","ts":"2024-03-05T05:16:55Z","msg":"Starting EventSource","controller":"kong-controller","source":"kind source: *unstructured.Unstructured"}
{"level":"info","ts":"2024-03-05T05:16:55Z","logger":"helm.controller","msg":"Watching dependent resource","ownerApiVersion":"management.example.com/v1alpha1","ownerKind":"Kong","apiVersion":"v1","kind":"ServiceAccount"}
{"level":"info","ts":"2024-03-05T05:16:55Z","msg":"Starting EventSource","controller":"kong-controller","source":"kind source: *unstructured.Unstructured"}
{"level":"info","ts":"2024-03-05T05:16:55Z","logger":"helm.controller","msg":"Watching dependent resource","ownerApiVersion":"management.example.com/v1alpha1","ownerKind":"Kong","apiVersion":"v1","kind":"ConfigMap"}
{"level":"info","ts":"2024-03-05T05:16:55Z","msg":"Starting EventSource","controller":"kong-controller","source":"kind source: *unstructured.Unstructured"}
{"level":"info","ts":"2024-03-05T05:16:55Z","logger":"helm.controller","msg":"Watching dependent resource","ownerApiVersion":"management.example.com/v1alpha1","ownerKind":"Kong","apiVersion":"batch/v1","kind":"CronJob"}
{"level":"info","ts":"2024-03-05T05:16:55Z","msg":"Starting EventSource","controller":"kong-controller","source":"kind source: *unstructured.Unstructured"}
{"level":"info","ts":"2024-03-05T05:16:55Z","logger":"helm.controller","msg":"Watching dependent resource","ownerApiVersion":"management.example.com/v1alpha1","ownerKind":"Kong","apiVersion":"networking.k8s.io/v1","kind":"NetworkPolicy"}
{"level":"info","ts":"2024-03-05T05:16:55Z","msg":"Starting EventSource","controller":"kong-controller","source":"kind source: *unstructured.Unstructured"}
{"level":"info","ts":"2024-03-05T05:16:55Z","logger":"helm.controller","msg":"Watching dependent resource","ownerApiVersion":"management.example.com/v1alpha1","ownerKind":"Kong","apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRole"}
......
......
```
What did you see instead? Under which circumstances?
No CR pods are created, and no new lines are dumped in the helm operator pod log.
kong CR:

```
# oc get kong
NAME      STATUS
gateway
```

pod list:

```
# oc get pod
NAME                        READY   STATUS    RESTARTS   AGE
kong-operator-7c5bc-2hql7   1/1     Running   0          60m
```
helm-operator pod log:

```
{"level":"info","ts":"2024-03-05T06:40:07Z","logger":"cmd","msg":"Version","Go Version":"go1.21.5","GOOS":"linux","GOARCH":"amd64","helm-operator":"v1.34.0-dirty","commit":"4e01bcd726aa8b0e092fcd3ab874961e276f3db3"}
{"level":"info","ts":"2024-03-05T06:40:07Z","logger":"cmd","msg":"Environment variable OPERATOR_NAME has been deprecated, use --leader-election-id instead."}
{"level":"info","ts":"2024-03-05T06:40:07Z","logger":"cmd","msg":"Watching namespaces","namespaces":["kong"]}
2024/03/05 06:40:07 Warning: Dependencies are handled in Chart.yaml since apiVersion "v2". We recommend migrating dependencies to Chart.yaml.
{"level":"info","ts":"2024-03-05T06:40:07Z","logger":"helm.controller","msg":"Watching resource","apiVersion":"management.example.com/v1alpha1","kind":"Kong","reconcilePeriod":"1m0s"}
{"level":"info","ts":"2024-03-05T06:40:07Z","logger":"controller-runtime.metrics","msg":"Starting metrics server"}
{"level":"info","ts":"2024-03-05T06:40:07Z","msg":"starting server","kind":"health probe","addr":"[::]:8081"}
{"level":"info","ts":"2024-03-05T06:40:07Z","logger":"controller-runtime.metrics","msg":"Serving metrics server","bindAddress":":8080","secure":false}
{"level":"info","ts":"2024-03-05T06:40:07Z","msg":"Starting EventSource","controller":"kong-controller","source":"kind source: *unstructured.Unstructured"}
{"level":"info","ts":"2024-03-05T06:40:07Z","msg":"Starting Controller","controller":"kong-controller"}
{"level":"info","ts":"2024-03-05T06:40:07Z","msg":"Starting workers","controller":"kong-controller","worker count":16}
```
Environment
Operator type:
Kubernetes cluster type:
$ operator-sdk version
$ go version (if language is Go)

```
go: 1.21.5
```
$ kubectl version

```
# oc version
Client Version: 4.12.18
Kustomize Version: v4.5.7
Server Version: 4.12.47
Kubernetes Version: v1.25.16+5c97f5b
```
Possible Solution
Additional context
- Relates: #6689
@lihongbj 1.34.0's release did not complete fully. Can you try updating to 1.34.1 to see if this resolves your issue?
@acornett21, I have tried with v1.34.1 and the issue still reproduces with the same symptom.