This repo contains static configuration specific to a "managed" OpenShift Dedicated (OSD) cluster.

To add a new SelectorSyncSet, add your yaml manifest to the deploy dir, then run the make command. Alternatively, you can enable GitHub Actions on your fork and make will be run automatically. The action will also create a new commit with the generated files.
- oyaml: `pip install oyaml`
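As an illustration of the flow above, a minimal manifest dropped into a deploy/ sub-directory might look like the following sketch (the name, namespace, and data are hypothetical; real manifests in this repo vary):

```yaml
# Hypothetical example manifest; any valid Kubernetes resource works here.
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-sre-config
  namespace: openshift-monitoring
data:
  key: "value"
```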
All resources in deploy/ are bundled into a template that is used by config management to apply to target "hive" clusters. The configuration supports two options for deployment. Resources can be:
- deployed directly to the "hive" cluster
- deployed to the "hive" cluster inside a SelectorSyncSet

Direct deployment (#1) supports resources that are not synced down to OSD clusters. SelectorSyncSet deployment (#2) supports resources that are synced down to OSD clusters. Each is explained in detail here. The general configuration is managed in a config.yaml file in each deploy directory. Key things of note:
- This file is optional! If not present, it's assumed deploymentMode is "SelectorSyncSet" with no additional configuration.
- Configuration is not inherited by sub-directories! Every (EVERY) directory in the deploy/ hierarchy must define a config.yaml file.
The deploymentMode property in config.yaml selects between the two options:

- deploymentMode (optional, default = "SelectorSyncSet") - either "Direct" or "SelectorSyncSet".
For Direct deployment you must specify the environments where the resource is deployed; there is no default set of environments. It is a child of the top-level direct property.

- environments (required, no default) - manages which environments the resources are deployed into. Valid values are any of "integration", "stage", and "production".
Example to deploy to all environments:
```yaml
deploymentMode: "Direct"
direct:
  environments: ["integration", "stage", "production"]
```
Example to deploy only to integration and stage:
```yaml
deploymentMode: "Direct"
direct:
  environments: ["integration", "stage"]
```
In the config.yaml file you define a top-level property selectorSyncSet. Within this configuration, matchLabels, matchExpressions, matchLabelsApplyMode, resourceApplyMode, and applyBehavior are supported.
- matchLabels (optional, default: {}) - adds additional matchLabels conditions to the SelectorSyncSet's clusterDeploymentSelector
- matchExpressions (optional, default: []) - adds matchExpressions conditions to the SelectorSyncSet's clusterDeploymentSelector
- resourceApplyMode (optional, default: "Sync") - sets the SelectorSyncSet's resourceApplyMode
- matchLabelsApplyMode (optional, default: "AND") - when set to "OR", generates a separate SSS per matchLabels condition. The default behavior creates a single SSS with all matchLabels conditions. This is to tackle a situation where we want to apply configuration for any one of many label conditions.
- applyBehavior (optional, default: None, see hive default) - sets the SelectorSyncSet's applyBehavior
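The defaulting rules above can be sketched in Python. This is a hypothetical helper for illustration only; the repo's actual build tooling implements this differently:

```python
# Hypothetical sketch of the per-directory config.yaml defaulting described
# above; the repo's real scripts differ.
def resolve_config(config=None):
    """Apply the documented defaults to a deploy-dir config.yaml dict."""
    config = dict(config or {})  # a missing config.yaml means an empty config
    config.setdefault("deploymentMode", "SelectorSyncSet")
    if config["deploymentMode"] == "SelectorSyncSet":
        sss = config.setdefault("selectorSyncSet", {})
        sss.setdefault("matchLabels", {})       # extra clusterDeploymentSelector labels
        sss.setdefault("matchExpressions", [])
        sss.setdefault("resourceApplyMode", "Sync")
        sss.setdefault("matchLabelsApplyMode", "AND")
    return config

print(resolve_config(None)["deploymentMode"])  # SelectorSyncSet
```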
Example to apply a directory for any of a set of label conditions using Upsert:

```yaml
deploymentMode: "SelectorSyncSet"
selectorSyncSet:
  matchLabels:
    myAwesomeLabel: "some value"
    someOtherLabel: "something else"
  resourceApplyMode: "Upsert"
  matchLabelsApplyMode: "OR"
```
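With matchLabelsApplyMode "OR", the generator emits one SelectorSyncSet per label condition rather than a single one. Conceptually, the output might look like the following sketch (the metadata names are illustrative, not the generator's real naming scheme):

```yaml
# Illustrative output only; actual generated names and fields differ.
apiVersion: hive.openshift.io/v1
kind: SelectorSyncSet
metadata:
  name: example-myawesomelabel
spec:
  clusterDeploymentSelector:
    matchLabels:
      myAwesomeLabel: "some value"
  resourceApplyMode: Upsert
---
apiVersion: hive.openshift.io/v1
kind: SelectorSyncSet
metadata:
  name: example-someotherlabel
spec:
  clusterDeploymentSelector:
    matchLabels:
      someOtherLabel: "something else"
  resourceApplyMode: Upsert
```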
A set of rules and alerts that SRE requires to ensure a cluster is functioning. There are two categories of rules and alerts found here:
- SRE specific, will never be part of OCP
- Temporary addition until made part of OCP
Persistent storage is configured using the configmap cluster-monitoring-config, which is read by the cluster-monitoring-operator to generate PersistentVolumeClaims and attach them to the Prometheus and Alertmanager pods.
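The shape of that configmap follows the cluster-monitoring-operator's documented format; a sketch (storage sizes here are illustrative, not the values this repo ships):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      volumeClaimTemplate:
        spec:
          resources:
            requests:
              storage: 100Gi
    alertmanagerMain:
      volumeClaimTemplate:
        spec:
          resources:
            requests:
              storage: 10Gi
```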
Instead of SRE having the cluster-admin role, a new ClusterRole, osd-sre-admin, is created with some permissions removed. The ClusterRole can be regenerated in the generate/sre-authorization directory. The role is granted to SRE via the osd-sre-admins group.

To elevate privileges, SRE can add themselves to the group osd-sre-cluster-admins, which is bound to the ClusterRole cluster-admin. Because this group is created and managed by Hive, all users are periodically wiped: the SelectorSyncSet will always have users: null. Therefore, SRE get elevated privileges for a limited time only.
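The mechanism can be pictured as Hive re-applying a Group resource whose users list is null, so any manually added membership lasts only until the next sync. A sketch of such a synced Group:

```yaml
# Sketch of the Hive-managed group; membership added by hand is wiped
# each time this is re-applied.
apiVersion: user.openshift.io/v1
kind: Group
metadata:
  name: osd-sre-cluster-admins
users: null
```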
Initially OSD will support a subset of operators only. These are managed by patching the OCP-shipped OperatorSource CRs. See deploy/osd-curated-operators.
NOTE that ClusterVersion is being patched to add overrides. If other overrides are needed we'll have to tune how we do this patching. It must be done along with the OperatorSource patching to ensure CVO doesn't revert the OperatorSource patching.
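A ClusterVersion override entry has the following shape (the target resource named here is an assumption for illustration, not necessarily what this repo patches):

```yaml
apiVersion: config.openshift.io/v1
kind: ClusterVersion
metadata:
  name: version
spec:
  overrides:
  # Each entry tells CVO to stop managing one resource ("unmanaged: true").
  - kind: OperatorSource
    group: operators.coreos.com
    name: community-operators       # illustrative target
    namespace: openshift-marketplace
    unmanaged: true
```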
In OSD, managed-cluster-config sets a key named branding to dedicated in the Console operator. This value is in turn read by code that applies the logo and other branding elements predefined for that value.
Docs TBA.
Refer to deploy/resource/quotas/README.md.
Docs TBA.
Prepares the cluster for elasticsearch and logging operator installation, and pre-configures curator to retain 2 days of indexes (1 day for operations).
To opt in to logging, the customer must:

- install the logging operator
- install the elasticsearch operator
- create a ClusterLogging CR in openshift-logging
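The final step above might look like the following sketch of a ClusterLogging CR (field names from the logging.openshift.io/v1 API; the node count and schedule are illustrative, so verify against the installed operator version):

```yaml
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  managementState: Managed
  logStore:
    type: elasticsearch
    elasticsearch:
      nodeCount: 3          # illustrative
  curation:
    type: curator
    curator:
      schedule: "30 3 * * *"  # illustrative schedule
```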
There are additional scripts in this repo that are kept here until a better home, or a better solution/process, is found.