openshift/openshift-ansible

Should an openshift_logging_elasticsearch_pvc_dynamic value be passed in here?

Closed this issue · 5 comments

When setting up storage for OpenShift Logging, the openshift-logging role's install_logging.yml task doesn't pass openshift_logging_elasticsearch_pvc_dynamic to the openshift_logging_elasticsearch role:

This causes Ansible to create PVC templates without a storageClassName in them (so the claims bind via the cluster's default storage class), like this:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: logging-es-0
  labels:
    logging-infra: support
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi

You can see this behavior in the following task:

# logging_elasticsearch_pvc.module_results.results | length > 0 returns a false positive
# so we check for the presence of 'stderr' to determine if the obj exists or not
# the RC for existing and not existing is both 0
- when:
    - logging_elasticsearch_pvc.module_results.stderr is defined
    - openshift_logging_elasticsearch_storage_type == "pvc"
  block:
    # storageclasses are used by default but if static then disable
    # storageclasses with the storageClassName set to "" in pvc.j2
    - name: Creating ES storage template - static
      template:
        src: "pvc.j2"
        dest: "{{ tempdir }}/templates/logging-es-pvc.yml"
      vars:
        obj_name: "{{ openshift_logging_elasticsearch_pvc_name }}"
        size: "{{ (openshift_logging_elasticsearch_pvc_size | trim | length == 0) | ternary('10Gi', openshift_logging_elasticsearch_pvc_size) }}"
        access_modes: "{{ openshift_logging_elasticsearch_pvc_access_modes | list }}"
        pv_selector: "{{ openshift_logging_elasticsearch_pvc_pv_selector }}"
      when:
        - not openshift_logging_elasticsearch_pvc_dynamic | bool

    # Storageclasses are used by default if configured
    - name: Creating ES storage template - dynamic
      template:
        src: "pvc.j2"
        dest: "{{ tempdir }}/templates/logging-es-pvc.yml"
      vars:
        obj_name: "{{ openshift_logging_elasticsearch_pvc_name }}"
        size: "{{ (openshift_logging_elasticsearch_pvc_size | trim | length == 0) | ternary('10Gi', openshift_logging_elasticsearch_pvc_size) }}"
        access_modes: "{{ openshift_logging_elasticsearch_pvc_access_modes | list }}"
        pv_selector: "{{ openshift_logging_elasticsearch_pvc_pv_selector }}"
        storage_class_name: "{{ openshift_logging_elasticsearch_pvc_storage_class_name | default('', true) }}"
      when:
        - openshift_logging_elasticsearch_pvc_dynamic | bool
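A fix in install_logging.yml could mirror what is already done for the ops cluster. This is only a sketch; the inventory-side variable name (openshift_logging_es_pvc_dynamic) is an assumption based on the ops equivalent used elsewhere in the role:

```yaml
# Sketch only: pass the app cluster's dynamic-PVC flag through to the
# openshift_logging_elasticsearch role, mirroring the ops-cluster wiring.
# The right-hand variable name is assumed from the ops counterpart.
openshift_logging_elasticsearch_pvc_dynamic: "{{ openshift_logging_es_pvc_dynamic }}"
```

With this in place, the "dynamic" template branch above would fire and render storageClassName into the PVC.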

By contrast, when creating an ops ES cluster (our app logging cluster is split from our ops logging cluster), the openshift_logging_elasticsearch_pvc_dynamic parameter is passed into the openshift_logging_elasticsearch role here:

openshift_logging_elasticsearch_pvc_dynamic: "{{ openshift_logging_es_ops_pvc_dynamic }}"

And that results in a properly formatted PVC template:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: logging-es-ops-1
  labels:
    logging-infra: support
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
  storageClassName: glusterfs-registry-block

My intention is to set up a block-backed storage class for "infrastructure" storage, as the OpenShift documentation recommends, and leave file-backed storage as the default storage class. The only workaround right now is to set the infra-related block storage class as the default, and then change the default back after the OpenShift install completes.
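For reference, that workaround hinges on the standard storageclass.kubernetes.io/is-default-class annotation. A hedged illustration, with a hypothetical provisioner value (the class name matches the PVC example above, but your cluster's provisioner may differ):

```yaml
# Hypothetical StorageClass marked as the cluster default during install.
# Post-install, this annotation would be flipped to "false" here and set
# to "true" on the file-backed storage class instead.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-registry-block
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: gluster.org/glusterblock   # assumption; cluster-specific
```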

If this change is appropriate, I can submit a pull request and possibly work on https://bugzilla.redhat.com/show_bug.cgi?id=1703239#, too.

I've just run into this issue on 3.11.157.

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten
/remove-lifecycle stale

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close

@openshift-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.