operator-sdk doesn't honor the dependency conditions of sub-charts of the child chart
Bug Report
This is a continuation of an earlier issue.
Bug Demo Repository: https://github.com/r4rajat/operator-sdk-bug-2
As per the suggestion, we removed the dependencies from the parent chart (in this case hello-world) and disabled the sub-charts alertmanager, kube-state-metrics, prometheus-node-exporter, and prometheus-pushgateway in the child chart (prometheus). After installing the operator and the operand, we could still see the deployments of the disabled sub-charts.
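For context, the chart layout in the demo repository is roughly the following (a sketch inferred from the description above; file names are assumptions):

helm/hello-world/                  # level 1 (parent); no dependencies entry in Chart.yaml
  Chart.yaml
  values.yaml
  charts/prometheus/               # level 2 (child), vendored directly
    Chart.yaml                     # declares the level-3 charts as dependencies
    values.yaml                    # alertmanager, kube-state-metrics, etc. disabled here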
What did you do?
$ git clone https://github.com/r4rajat/operator-sdk-bug-2
$ cd ./operator-sdk-bug-2
$ make docker-build docker-push
$ operator-sdk olm install
$ make bundle bundle-build bundle-push
$ operator-sdk run bundle docker.io/r4rajat/hello-world-operator-bundle:v1.0.1 --timeout 15m
$ kubectl apply -f ./config/samples/charts_v1alpha1_helloworld.yaml
$ oc get pods
NAME READY STATUS RESTARTS AGE
570b6af36fc4e01e3a860f8f0bef662c8c5f3eea2b342c1913d16c8d2bdw4dq 0/1 Completed 0 42m
docker-io-r4rajat-hello-world-operator-bundle-v1-0-1 1/1 Running 0 42m
hello-world-operator-controller-manager-7f4d65b9dd-gjmlq 1/2 CrashLoopBackOff 11 (3m55s ago) 41m
helloworld-sample-alertmanager-0 1/1 Running 0 39m
helloworld-sample-hello-world-67976bff97-fk8n5 0/1 CrashLoopBackOff 12 (2m47s ago) 39m
helloworld-sample-kube-state-metrics-6488696777-vz6tw 1/1 Running 0 39m
helloworld-sample-prometheus-node-exporter-4l59t 0/1 Pending 0 39m
helloworld-sample-prometheus-node-exporter-672cb 0/1 Pending 0 39m
helloworld-sample-prometheus-node-exporter-cq8cr 0/1 Pending 0 39m
helloworld-sample-prometheus-node-exporter-fjpkp 0/1 Pending 0 39m
helloworld-sample-prometheus-node-exporter-mhhs6 0/1 Pending 0 39m
helloworld-sample-prometheus-node-exporter-pwk8l 0/1 Pending 0 39m
helloworld-sample-prometheus-pushgateway-59c4d48896-rqqwp 1/1 Running 0 39m
helloworld-sample-prometheus-server-54bc6978dd-mrsr5 2/2 Running 0 39m
mysql-0 1/1 Running 0 3h7m
What did you expect to see?
Deployments/Pods for disabled sub-charts should not install
What did you see instead? Under which circumstances?
Deployments/Pods for disabled sub-charts are installed
Environment
Operator type:
/language helm
Kubernetes cluster type: OpenShift v4.12.25
$ operator-sdk version
operator-sdk version: "v1.31.0", commit: "e67da35ef4fff3e471a208904b2a142b27ae32b1", kubernetes version: "1.26.0", go version: "go1.19.11", GOOS: "linux", GOARCH: "amd64"
$ go version
go version go1.21.1 linux/amd64
$ kubectl version
Client Version: v1.28.1
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.25.11+1485cc9
WARNING: version difference between client (1.28) and server (1.25) exceeds the supported minor version skew of +/-1
Ok, so making sure I understand the specific details here:
- We have 3 levels of charts:
a. Level 1 - hello-world
b. Level 2 - prometheus
c. Level 3 - alertmanager, kube-state-metrics, prometheus-node-exporter, prometheus-pushgateway
- When you create a CR for hello-world, you expect that the prometheus chart is always enabled as a dependency of hello-world and is always installed as part of hello-world.
- You expect that, by default, none of the Level 3 charts are installed, because they:
a. Have a <name>.enabled condition on their dependency declaration
b. Are all explicitly enabled: false in the default values.yaml of the level 2 prometheus chart (see the sketch below).
And what you're actually seeing is that the level 3 charts are always installed.
Is the above correct?
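For reference, the enable/disable wiring described above looks roughly like this (a sketch; the version and repository values are assumptions modeled on the upstream prometheus community chart):

# helm/hello-world/charts/prometheus/Chart.yaml (excerpt)
dependencies:
  - name: alertmanager
    version: "x.y.z"                  # illustrative version
    repository: https://prometheus-community.github.io/helm-charts
    condition: alertmanager.enabled   # the <name>.enabled condition
# helm/hello-world/charts/prometheus/values.yaml (excerpt)
alertmanager:
  enabled: false                      # explicitly disabled by default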
Yes @joelanford
Looking into it, plain helm shows the same behavior. If we package the chart and install it
$ helm package helm/hello-world/
$ helm install hello-world-0.1.0.tgz --generate-name
we can see the disabled 3rd-level charts installed as well.
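One quick way to confirm this without installing anything is to render the packaged chart and list the workloads it produces (a sketch; the grep pattern is illustrative):

$ helm template hello-world-0.1.0.tgz | grep -E '^kind: (Deployment|DaemonSet|StatefulSet)'

If the disabled sub-charts were honored, no DaemonSet (node-exporter) or alertmanager StatefulSet should show up in the rendered output.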
Ok, I think I have a working fix for your chart. I tested locally and it seemed to work:
- Add the local prometheus dependency back in to your root hello-world Chart.yaml.
- However, do not use charts for the local file path. Use a different directory name or location. The example from the helm docs is:
dependencies:
  - name: nginx
    version: "1.2.3"
    repository: "file://../dependency_chart/nginx"
I'll take a look at it
@joelanford, I tried this method. I've moved the prometheus chart to helm/dependency_chart/ and updated the root Chart.yaml as
dependencies:
  - name: prometheus
    version: 24.4.0
    repository: "file://../dependency_chart/prometheus"
And ran
$ operator-sdk init \
--project-name=hello-world-operator --plugins=helm --domain=dev \
--helm-chart=./helm/hello-world --verbose
And again got the following error:
FATA[0000] failed to create API: unable to scaffold with "base.helm.sdk.operatorframework.io/v1": failed to fetch chart dependencies: directory /home/r4rajat/Projects/operator-sdk-bug-2/helm-charts/dependency_chart/prometheus not found
Error: failed to initialize project: unable to run post-scaffold tasks of "base.helm.sdk.operatorframework.io/v1": exit status 1
Interesting. I'm seeing the same issue when using ../dependency_chart. However, if I change ../dependency_chart to ./dependency_chart (and move that directory to helm/hello-world/dependency_chart), it works. Not sure if this is an issue with SDK or helm.
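In other words, the change that made it work amounts to this edit of the repository field (paths taken from the comments above):

# failed: source path outside the chart root
repository: "file://../dependency_chart/prometheus"
# worked: dependency_chart/ moved inside helm/hello-world/
repository: "file://./dependency_chart/prometheus"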
Looks like this needs more investigation. Adding this to backlog for now.
Hi @joelanford, it seems to be working for us. Thanks for the solution.
With this approach, there are two observations I've made:
- If we want to install the chart using helm install, we have to build the dependency first using helm dependency build, as helm expects the dependencies to be in the charts folder (a minimal flow is sketched below).
- The operator-sdk init command works without building the dependencies.
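A minimal install flow under that constraint might look like this (a sketch; the release name is illustrative):

$ helm dependency build helm/hello-world   # packages dependency_chart/prometheus into helm/hello-world/charts/
$ helm install hello-world ./helm/hello-world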
Great! Sounds like we can close this out?
Hey @joelanford, let's not close it. There is still an issue we're facing: we now have two copies of the same chart in our helm directory.
Say we want to publish our chart. We need to do the following steps:
# helm dependency build will create a charts folder with the compressed dependency, i.e. prometheus
$ helm dependency build helm/hello-world
$ helm package helm/hello-world
$ helm push hello-world-0.1.0.tgz
Now we have both the charts and the dependency_chart folder with the same dependencies.
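That is, after helm dependency build the tree contains the dependency twice (a sketch; the archive name follows helm's <name>-<version>.tgz convention):

helm/hello-world/
  Chart.yaml
  charts/
    prometheus-24.4.0.tgz          # created by helm dependency build
  dependency_chart/
    prometheus/                    # local source declared in Chart.yaml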
I don't think I would consider this a bug. The dependencies section of the Chart.yaml file is intended to be the way to define how to pull in and populate the charts directory of your chart.
Ultimately, I think there are two separate things at play here, that together present the problem you're having:
- You must declare a dependency via Chart.yaml if you want to be able to enable/disable it based on values.
- You can't declare (or at least there are issues declaring?) a local dependency via Chart.yaml where the source path is in the chart's ./charts directory (see the sketch after this list).
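Point 2 is the crux here. A declaration like the first one below is the problematic case, while the second is the workaround that worked above (a sketch; both assume the prometheus sources live inside helm/hello-world/):

# problematic: local dependency sourced from the chart's own charts/ directory
dependencies:
  - name: prometheus
    version: 24.4.0
    repository: "file://./charts/prometheus"

# workaround: source the dependency from a sibling directory instead
dependencies:
  - name: prometheus
    version: 24.4.0
    repository: "file://./dependency_chart/prometheus"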
There's an issue in the helm repo I filed a few years ago that I'm pretty sure is related to this area: helm/helm#8120. The stalebot closed it out, so as far as I know, nothing was ever done to fix it.
Anything we do here (even if we could) would just be a workaround to the underlying issues in helm. I'd prefer we pursue a fix in helm that we could then incorporate here.
Hello everyone
Does this help with the problem you are facing, @r4rajat? #4689 (comment)
Issues go stale after 90d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle rotten
/remove-lifecycle stale
Rotten issues close after 30d of inactivity.
Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.
/close
@openshift-bot: Closing this issue.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.