This repo contains static configuration specific to a "managed" OpenShift Dedicated (OSD) cluster.
https://issues.redhat.com/browse/SDE-2786 has changed the repo slightly: `/deploy` holds the sources of truth, and `/generated_deploy` holds the configurations that will be applied by Hive.
To add a new SelectorSyncSet, add your yaml manifest to the `deploy` dir, then run the `make` command.

Alternatively, you can enable GitHub Actions on your fork and `make` will be run automatically. Additionally, the action will create a new commit with the generated files.
To add an ACM (Governance) Policy:

- If the manifest of the object you want to convert to a policy already exists in `deploy`: in the object's `config.yaml`, add a `policy` field with `destination: "acm-policies"` (example: https://github.com/openshift/managed-cluster-config/blob/master/deploy/backplane/cee/config.yaml)
- If the manifest of the object does not exist: add your manifests with a `config.yaml` file. If you only want this object to be deployed as a Policy, see this example

`make` will look for `config.yaml` files, run them through the PolicyGenerator binary, and save the output to the `generated_deploy/acm-policies` directory. `make` will then automatically add the policy as a new SelectorSyncSet.
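For illustration, a `config.yaml` that routes its directory's manifests into policy generation might look like the following (a sketch based on the fields described above, not a copy of a real file in this repo):

```yaml
# deploy/<your-dir>/config.yaml -- illustrative sketch
deploymentMode: "SelectorSyncSet"
policy:
  # tells make to generate an ACM Policy into generated_deploy/acm-policies
  destination: "acm-policies"
```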
- oyaml: `pip install oyaml`
All resources in `generated_deploy/` are bundled into a template that is used by config management to apply to target "hive" clusters. The configuration supports two options for deployment, so that resources are either:
- deployed directly to the "hive" cluster
- deployed to the "hive" cluster inside a SelectorSyncSet
Direct deployment (#1) supports resources that are not synced down to OSD clusters. SelectorSyncSet deployment (#2) supports resources that are synced down to OSD clusters. Each is explained in detail here. The general configuration is managed in a `config.yaml` file in each deploy directory. Key things of note:
- This file is now mandatory in the scope of OSD-15267 and has been added to all folders. If it is not defined, `make` will fail:

```
+ scripts/generate_template.py -t scripts/templates/ -y deploy -d /Users/bdematte/git/managed-cluster-config/hack/ -r managed-cluster-config
ERROR : Missing config.yaml for resource defined in deploy/acm-policies
Some config.yaml files are missing, exiting...
make: *** [generate-hive-templates] Error 1
```
- Configuration is not inherited by sub-directories! Every (EVERY) directory in the `deploy/` hierarchy must define a `config.yaml` file.
You must specify a `deploymentMode` property in `config.yaml`.

- `deploymentMode` (optional, default = `"SelectorSyncSet"`) - either `"Direct"` or `"SelectorSyncSet"`
You must specify the `environments` where the resource is deployed. There is no default set of environments. It is a child of the top level `direct` property.

- `environments` (required, no default) - manages what environments the resources are deployed into. Valid values are any of `"integration"`, `"stage"`, and `"production"`.
Example to deploy to all environments:

```yaml
deploymentMode: "Direct"
direct:
  environments: ["integration", "stage", "production"]
```
Example to deploy only to integration and stage:

```yaml
deploymentMode: "Direct"
direct:
  environments: ["integration", "stage"]
```
In the `config.yaml` file you define a top level property `selectorSyncSet`. Within this, configuration is supported for `matchLabels`, `matchExpressions`, `matchLabelsApplyMode`, `resourceApplyMode`, and `applyBehavior`.
- `matchLabels` (optional, default: `{}`) - adds additional `matchLabels` conditions to the SelectorSyncSet's `clusterDeploymentSelector`
- `matchExpressions` (optional, default: `[]`) - adds `matchExpressions` conditions to the SelectorSyncSet's `clusterDeploymentSelector`
- `resourceApplyMode` (optional, default: `"Sync"`) - sets the SelectorSyncSet's `resourceApplyMode`
- `matchLabelsApplyMode` (optional, default: `"AND"`) - when set to `"OR"`, generates a separate SSS per `matchLabels` condition. Default behavior creates a single SSS with all `matchLabels` conditions. This is to tackle situations where we want to apply configuration for any one of many label conditions.
- `applyBehavior` (optional, default: None, see hive default) - sets the SelectorSyncSet's `applyBehavior`
You can also define a top level property `policy` to specify the behaviour of `./scripts/generate-policy-config.py` for the resource. Supported sub-properties:

- `complianceType` (optional, default: `"mustonlyhave"`, see operator values) - selects the compliance type for the policy when used by `./scripts/generate-policy-config.py`
- `metadataComplianceType` (optional, default: `"musthave"`, see operator values) - selects the compliance type for metadata for the policy when used by `./scripts/generate-policy-config.py`
Example to apply a directory for any of a set of label conditions using Upsert:

```yaml
deploymentMode: "SelectorSyncSet"
selectorSyncSet:
  matchLabels:
    myAwesomeLabel: "some value"
    someOtherLabel: "something else"
  resourceApplyMode: "Upsert"
  matchLabelsApplyMode: "OR"
policy:
  complianceType: "mustonlyhave"
  metadataComplianceType: "musthave"
```
A set of rules and alerts that SRE requires to ensure a cluster is functioning. There are two categories of rules and alerts found here:
- SRE specific, will never be part of OCP
- Temporary addition until made part of OCP
Persistent storage is configured using the configmap `cluster-monitoring-config`, which is read by the cluster-monitoring-operator to generate PersistentVolumeClaims and attach them to the Prometheus and Alertmanager pods.
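As a sketch, such a configmap could look like the following (the storage sizes shown are illustrative assumptions, not the values this repo ships):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      volumeClaimTemplate:
        spec:
          resources:
            requests:
              storage: 100Gi  # illustrative size
    alertmanagerMain:
      volumeClaimTemplate:
        spec:
          resources:
            requests:
              storage: 10Gi   # illustrative size
```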
Initially OSD will support a subset of operators only. These are managed by patching the OCP shipped OperatorSource CRs. See `deploy/osd-curated-operators`.
NOTE that ClusterVersion is being patched to add overrides. If other overrides are needed we'll have to tune how we do this patching. It must be done along with the OperatorSource patching to ensure CVO doesn't revert the OperatorSource patching.
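The shape of such an override is sketched below (the entry shown is hypothetical; the real list is what `deploy/osd-curated-operators` applies):

```yaml
apiVersion: config.openshift.io/v1
kind: ClusterVersion
metadata:
  name: version
spec:
  overrides:
  # hypothetical entry: tells the CVO to stop managing this resource
  - kind: OperatorSource
    group: operators.coreos.com
    name: community-operators
    namespace: openshift-marketplace
    unmanaged: true
```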
In OSD, managed-cluster-config sets a key named `branding` to `dedicated` in the Console operator. This value is in turn read by code that applies the logo and other branding elements predefined for that value.
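One common shape for this is the Console operator configuration, sketched below (the field names and casing here are assumptions and may differ by OCP version):

```yaml
apiVersion: operator.openshift.io/v1
kind: Console
metadata:
  name: cluster
spec:
  customization:
    brand: dedicated  # assumed value; read by the console to select OSD branding
```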
Docs TBA.
Refer to deploy/resource-quotas/README.md.
Docs TBA.
- pyyaml: `pip install pyyaml`
There are additional scripts in this repo that live here as a holding place until a better home or a better solution / process is found.