question: how to deal with configmap + json + jinja templating
Sispheor opened this issue · 5 comments
Hello, just a question. I have a ConfigMap to send; it uses JSON.
Here is my task:
- name: Front config map
  k8s:
    host: "{{ oc_host }}"
    api_key: "{{ oc_token }}"
    verify_ssl: no
    state: present
    namespace: "{{ project_name }}"
    definition:
      kind: ConfigMap
      apiVersion: v1
      metadata:
        name: "{{ frontend_configmap_name }}"
      data:
        configuration.json: |-
          {
            "url_back_end": "{{ frontend_application_url }}",
            "registration_sla_link_url": "{{ registration_sla_link_url }}",
            "registration_security_url": "{{ registration_security_url }}",
            "registration_partnership_contracts_url": "{{ registration_partnership_contract_url }}",
            "registration_legal_logs_CNIL_url": "{{ registration_legal_logs_cnil_url }}"
          }
And here is the output of the module:
TASK [frontend : Front config map] ****************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "error": 400, "msg": "Failed to create object: {\"kind\":\"Status\",\"apiVersion\":\"v1\",\"metadata\":{},\"status\":\"Failure\",\"message\":\"ConfigMap in version \\\"v1\\\" cannot be handled as a ConfigMap: [pos 54]: json: expect char '\\\"' but got char '{'\",\"reason\":\"BadRequest\",\"code\":400}\n", "reason": "Bad Request", "status": 400}
Is there a good practice for handling this? If I remove the Jinja parts, it works.
Hi,
Put the resource definition in a Jinja2-templated file and use definition: "{{ lookup('template', 'nameofdir/nameof-file.yml') }}" instead of an inline resource definition in your playbook. Remove all the quotes from the variables in that file.
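A minimal sketch of that approach (the file path and name are illustrative; the Ansible template lookup resolves relative paths against the role's templates/ directory). First, the templated resource file, e.g. templates/frontend-configmap.yml:

```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: {{ frontend_configmap_name }}
data:
  configuration.json: |-
    {
      "url_back_end": "{{ frontend_application_url }}",
      "registration_sla_link_url": "{{ registration_sla_link_url }}",
      "registration_security_url": "{{ registration_security_url }}",
      "registration_partnership_contracts_url": "{{ registration_partnership_contract_url }}",
      "registration_legal_logs_CNIL_url": "{{ registration_legal_logs_cnil_url }}"
    }
```

Then the task references the rendered template instead of an inline definition:

```yaml
- name: Front config map
  k8s:
    host: "{{ oc_host }}"
    api_key: "{{ oc_token }}"
    verify_ssl: no
    state: present
    namespace: "{{ project_name }}"
    definition: "{{ lookup('template', 'frontend-configmap.yml') }}"
```

The lookup renders the file with Jinja2 before the k8s module parses it, so the module receives a fully substituted document rather than re-interpreting the Jinja braces inside the JSON string.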
Issues go stale after 90d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle rotten
/remove-lifecycle stale
Rotten issues close after 30d of inactivity.
Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.
/close
@openshift-bot: Closing this issue.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.