error converting YAML to JSON
Closed this issue · 3 comments
**Describe the bug**
I'm deploying the Helm chart via the Terraform `helm_release` resource. I downloaded the chart locally to a folder, and when I try to deploy it I get the following error. I'm currently using version 2.8.1 of the chart.

```
YAML parse error on connaisseur/templates/deployment.yaml: error converting YAML to JSON: yaml: line 53: mapping values are not allowed in this context
```
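(Editor's note: as general background, this parser error usually means the YAML contains a second, unexpected `key: value` at a position where only a scalar is allowed, most often an unquoted colon inside a value. A minimal, hypothetical illustration, not taken from this chart:)

```yaml
# Invalid: the unquoted colon in the value makes the parser
# see a second mapping value in the same context
note: time: 10:00

# Valid: quoting the value removes the ambiguity
note: "time: 10:00"
```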
**Expected behavior**
I expect connaisseur to install using a local chart copy.
**Optional: To reproduce**
Use https://github.com/aws-ia/terraform-aws-eks-blueprints-addons/blob/main/helm.tf with the following parameters for the above resource:
```hcl
name       = "connaisseur"
chart      = "../../charts/connaisseur/${var.connaisseur_version}/connaisseur"
repository = ""
namespace  = "connaisseur"
values = [
  "${data.template_file.connaisseur_helm_chart_values[0].rendered}"
]
atomic = true
```
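(Editor's note: put together, the full resource might look like the sketch below. The resource label `connaisseur` is hypothetical; the arguments are taken verbatim from the reporter's snippet, and the `var`/`data` references are assumed to be defined elsewhere in the reporter's configuration.)

```hcl
# Sketch of the complete helm_release resource, assuming the
# surrounding Terraform configuration defines var.connaisseur_version
# and the data.template_file used for the values.
resource "helm_release" "connaisseur" {
  name       = "connaisseur"
  chart      = "../../charts/connaisseur/${var.connaisseur_version}/connaisseur"
  repository = ""
  namespace  = "connaisseur"

  values = [
    "${data.template_file.connaisseur_helm_chart_values[0].rendered}"
  ]

  atomic = true
}
```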
Thanks
Hi @samispurs,
unfortunately, I wasn't able to reproduce your issue. You said you were using chart version v2.8.1; however, there is no such chart version, only application version 2.8.1, which was released with chart v1.6.1, so I am assuming that is what you are using, correct?
In application version 2.8.1 there is no mapping on line 53 of `deployment.yaml`, so I was wondering whether you could run `helm template` with your local copy of the chart and share the relevant section of the rendered `deployment.yaml`, so that we can have a closer look.
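(Editor's note: for anyone following along, the requested rendering could be done roughly as below; this is a sketch that assumes Helm v3 is installed, and `<version>` is a placeholder for the chart directory name in the reporter's layout.)

```shell
# Render only the deployment manifest from the local chart copy,
# so the section around the reported line 53 can be inspected.
# <version> is a placeholder for the local chart folder name.
helm template connaisseur ../../charts/connaisseur/<version>/connaisseur \
  --show-only templates/deployment.yaml
```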
Hi @annekebr, yes, my bad, you are correct: the chart version is v1.6.1. The following is the output of `helm template`. Thanks!
```yaml
---
# Source: connaisseur/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: connaisseur-deployment
  namespace: default
  labels:
    app.kubernetes.io/name: connaisseur
    helm.sh/chart: connaisseur-1.6.1
    app.kubernetes.io/instance: connaisseur
    app.kubernetes.io/managed-by: Helm
  annotations:
    checksum/config: f70e68a76a3f85438328da9216780fd68cdc665a3d663a2942c51cb058ad7f35
spec:
  replicas: 3
  selector:
    matchLabels:
      app.kubernetes.io/name: connaisseur
      app.kubernetes.io/instance: connaisseur
  template:
    metadata:
      labels:
        app.kubernetes.io/name: connaisseur
        app.kubernetes.io/instance: connaisseur
      annotations:
        checksum/config: f70e68a76a3f85438328da9216780fd68cdc665a3d663a2942c51cb058ad7f35
    spec:
      serviceAccountName: connaisseur-serviceaccount
      containers:
        - name: connaisseur
          image: "docker.io/securesystemsengineering/connaisseur:v2.8.1"
          imagePullPolicy: IfNotPresent
          livenessProbe:
            httpGet:
              path: /health
              port: 5000
              scheme: HTTPS
            initialDelaySeconds: 10
            periodSeconds: 5
          readinessProbe:
            httpGet:
              path: /ready
              port: 5000
              scheme: HTTPS
            initialDelaySeconds: 5
            periodSeconds: 5
          volumeMounts:
            - name: connaisseur-certs
              mountPath: /app/certs
              readOnly: true
            - name: connaisseur-config
              mountPath: /app/connaisseur-config/config.yaml
              subPath: config.yaml
              readOnly: true
            - name: connaisseur-config-sigstore
              mountPath: /app/.sigstore
              readOnly: false
          envFrom:
            - configMapRef:
                name: connaisseur-env
            - secretRef:
                name: connaisseur-env-secrets
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
          resources:
            limits:
              cpu: 1000m
              memory: 512Mi
            requests:
              cpu: 100m
              memory: 128Mi
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
                - ALL
            privileged: false
            readOnlyRootFilesystem: true
            runAsGroup: 20001
            runAsNonRoot: true
            runAsUser: 10001
            seccompProfile:
              type: RuntimeDefault
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - podAffinityTerm:
                labelSelector:
                  matchExpressions:
                    - key: app.kubernetes.io/instance
                      operator: In
                      values:
                        - connaisseur
                topologyKey: kubernetes.io/hostname
              weight: 100
      volumes:
        - name: connaisseur-certs
          secret:
            secretName: connaisseur-tls
        - name: connaisseur-config
          configMap:
            name: connaisseur-config
        - name: connaisseur-config-sigstore
          emptyDir: {}
---
```
I don't get this behavior with the latest version, v3.3.0, and will just use that version instead.