template subcommand does not produce the same output as install --dry-run --debug command
Given a chart whose dependencies are gated by a condition in the parent chart's requirements.yaml file, I find that the output of helm install --dry-run --debug ./app differs from that of helm template ./app. I don't know whether this is actually a bug or an acceptable deviation, but I thought it worth reporting: I would generally expect the dry-run output to match the template output.
Under install --dry-run --debug, the output correctly excludes the gated chart dependency; helm template incorrectly includes it.
In the output below, the chart dependency in question consists of three zookeeper Service objects. I want helm to emit these services only when the condition service-aliases.enabled is true.
There is no parent chart values.yaml file.
helm version
$ helm version
Client: &version.Version{SemVer:"v2.4.2", GitCommit:"82d8e9498d96535cc6787a6a9194a76161d29b4c", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.4.2", GitCommit:"82d8e9498d96535cc6787a6a9194a76161d29b4c", GitTreeState:"clean"}
Parent requirements.yaml file
dependencies:
  - name: zookeeper-service
    repository: file://charts/zookeeper-service
    version: 1.0.0
    condition: service-aliases.enabled
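For context, the condition path resolves against the parent chart's values. Since this chart ships no parent values.yaml (noted above), a hypothetical one that disables the dependency by default might look like this; the key path is taken from the condition above, and the file itself is my illustration, not part of the chart:

```yaml
# Hypothetical parent values.yaml (this chart does not include one).
# The condition in requirements.yaml looks up this key path; when it
# resolves to false, the zookeeper-service dependency should be skipped.
service-aliases:
  enabled: false
```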
The chart dependency
zookeeper-service (three objects lumped into one service.yaml file)
kind: Service
apiVersion: v1
metadata:
  name: zookeeper01
  namespace: super-server
spec:
  type: ExternalName
  externalName: zookeeper01.default.svc.cluster.local
  ports:
    - port: 2181
---
kind: Service
apiVersion: v1
metadata:
  name: zookeeper02
  namespace: super-server
spec:
  type: ExternalName
  externalName: zookeeper01.default.svc.cluster.local
  ports:
    - port: 2181
---
kind: Service
apiVersion: v1
metadata:
  name: zookeeper03
  namespace: super-server
spec:
  type: ExternalName
  externalName: zookeeper01.default.svc.cluster.local
  ports:
    - port: 2181
install --dry-run --debug output
$ helm install --dry-run --debug --set service-aliases.enabled=false super-server
[debug] Created tunnel using local port: '53575'
[debug] SERVER: "localhost:53575"
[debug] Original chart version: ""
[debug] CHART PATH: /Users/mpetrovic/Projects/acme/inf/super-server/super-server
NAME: tan-jackal
REVISION: 1
RELEASED: Wed Jun 7 16:05:18 2017
CHART: super-server-1.0.0
USER-SUPPLIED VALUES:
service-aliases:
  enabled: false
COMPUTED VALUES:
service-aliases:
  enabled: false
HOOKS:
MANIFEST:
---
# Source: super-server/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: super-server
  namespace: super-server
spec:
  ports:
    - name: super-server
      port: 28080
      protocol: TCP
      targetPort: 28080
  selector:
    name: super-server
---
# Source: super-server/templates/deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: super-server
spec:
  replicas: 3
  selector:
    matchLabels:
      name: super-server
  template:
    metadata:
      labels:
        name: super-server
        pool: auth-pool
      name: super-server
    spec:
      containers:
        - env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: PUBLICURL
              value: http://$(POD_IP):28080
          image: docker-registry.dev.acme.com/acme/super-server:2.16
          imagePullPolicy: IfNotPresent
          name: super-server
          ports:
            - containerPort: 28080
              name: pool-port
          resources:
            limits:
              cpu: 500m
              memory: 1500Mi
            requests:
              cpu: 500m
              memory: 1500Mi
helm template output
$ helm template --set service-aliases.enabled=false super-server
---
# Source: super-server/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: super-server
  namespace: super-server
spec:
  ports:
    - name: super-server
      port: 28080
      protocol: TCP
      targetPort: 28080
  selector:
    name: super-server
---
# Source: super-server/charts/zookeeper-service/templates/service.yaml
---
kind: Service
apiVersion: v1
metadata:
  name: zookeeper01
  namespace: super-server
spec:
  type: ExternalName
  externalName: zookeeper01.default.svc.cluster.local
  ports:
    - port: 2181
---
kind: Service
apiVersion: v1
metadata:
  name: zookeeper02
  namespace: super-server
spec:
  type: ExternalName
  externalName: zookeeper01.default.svc.cluster.local
  ports:
    - port: 2181
---
kind: Service
apiVersion: v1
metadata:
  name: zookeeper03
  namespace: super-server
spec:
  type: ExternalName
  externalName: zookeeper01.default.svc.cluster.local
  ports:
    - port: 2181
---
# Source: super-server/templates/deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: super-server
spec:
  replicas: 3
  selector:
    matchLabels:
      name: super-server
  template:
    metadata:
      labels:
        name: super-server
        pool: super-pool
      name: super-server
    spec:
      containers:
        - env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: PUBLICURL
              value: http://$(POD_IP):28080
          image: docker-registry.dev.acme.com/acme/super-server:2.16
          imagePullPolicy: IfNotPresent
          name: super-server
          ports:
            - containerPort: 28080
              name: pool-port
          resources:
            limits:
              cpu: 500m
              memory: 1500Mi
            requests:
              cpu: 500m
              memory: 1500Mi
I'm seeing a similar issue.
Personally, I think that if there is any deviation between helm install --dry-run --debug ./app and helm template ./app, it removes the usefulness of this plugin entirely.
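For anyone trying to reproduce this, one way to surface the divergence is to diff the two renderings directly. This is a sketch, assuming the chart lives at ./super-server (adjust the path to your chart) and that a Tiller-reachable cluster is available for the dry-run:

```shell
#!/bin/sh
# Sketch: compare `helm install --dry-run --debug` with `helm template`.
# Assumes the chart directory is ./super-server.

# The dry-run output prefixes the manifests with release metadata, so keep
# only what follows the MANIFEST: line before comparing.
helm install --dry-run --debug --set service-aliases.enabled=false ./super-server \
  | sed -n '/^MANIFEST:/,$p' | tail -n +2 > /tmp/dry-run.yaml

helm template --set service-aliases.enabled=false ./super-server > /tmp/template.yaml

# Non-empty diff output shows the dependency that `helm template` failed to exclude.
diff /tmp/dry-run.yaml /tmp/template.yaml
```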