Host Operator
Build
Requires Go version 1.14 - download it for your development environment here.
This repository uses Go modules. You may need to export GO111MODULE=on to turn module support on.
Development
The dev.mk targets in the toolchain-e2e repository can be used to build and deploy the host and member operators for development, or follow the guide: https://github.com/codeready-toolchain/toolchain-e2e/blob/master/dev_install.adoc
OLM catalog files
There is one operator bundle stored in the ./deploy/olm-catalog/toolchain-host-operator/ directory that is used as a base template for generating all new versions of the operator bundle.
The CSV that is part of the bundle gathers information from multiple files inside this repository, so make sure that it stays in sync. Every time any of the following files changes, run make generate-olm-files to update the whole operator bundle as well as the hack files to the latest version:
- role.yaml
- cluster_role.yaml
- the actual CSV
- any of the CRDs
- any of the CR examples
Installing operator
To install the host operator via OperatorHub you need to have OpenShift 4.2+ running and access to a docker registry.
Since the operator is not available in OperatorHub nor in any registry by default, you need to push the image to the docker registry, create a CatalogSource and a ClusterServiceVersion in the openshift-marketplace namespace, and then create an OperatorGroup and a Subscription in the namespace you want to install the operator to.
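For orientation, the OLM resources created by the make targets look roughly like the sketch below. This is hand-written and illustrative only: the resource names, the ConfigMap reference, and the channel are assumptions; the real manifests are generated by the Makefile targets.

```yaml
# Illustrative sketch only - the real manifests are generated by the make targets.
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: host-operator              # hypothetical name
  namespace: openshift-marketplace
spec:
  sourceType: configmap
  configMap: host-operator-csv     # hypothetical ConfigMap holding the CSV and CRDs
  displayName: Host Operator
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: host-operator
  namespace: target-namespace      # the namespace you install the operator to
spec:
  targetNamespaces:
  - target-namespace
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: host-operator
  namespace: target-namespace
spec:
  name: toolchain-host-operator    # package name from the CSV
  source: host-operator            # must match the CatalogSource name
  sourceNamespace: openshift-marketplace
  channel: alpha                   # hypothetical channel
```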
Before running any make target, make sure that you have the QUAY_NAMESPACE variable set to your quay username (or to any namespace you want to push the image to):
$ export QUAY_NAMESPACE=<quay-username>
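To illustrate what QUAY_NAMESPACE controls, the image reference is composed roughly as sketched below (a hypothetical illustration; the tag and exact naming live in the Makefile):

```shell
# Hypothetical sketch of how the image reference is derived from QUAY_NAMESPACE;
# the actual naming logic lives in the Makefile.
QUAY_NAMESPACE=cooljohn   # example value - use your own quay username
IMAGE="quay.io/${QUAY_NAMESPACE}/host-operator:latest"
echo "$IMAGE"             # -> quay.io/cooljohn/host-operator:latest
```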
Prerequisites:
- Make sure the target OpenShift 4.2+ cluster is accessible via the oc command.
- Log in to the target OpenShift cluster with cluster admin privileges.
- Set the QUAY_NAMESPACE variable properly - see above.
- Log in to quay.io via docker login quay.io (in case you want to use quay as the docker registry).
Then, to install the operator run:
$ make install-operator
Note: The first push to quay will create a host-operator repository that is private by default, so go to https://quay.io/repository/<your-username>/host-operator?tab=settings and set the repository visibility to public.
That Makefile target takes care of several steps that can also be executed separately:
- build the image: $ make docker-image
- push the image to the registry: $ make docker-push
- create the CatalogSource and a ConfigMap with the ClusterServiceVersion and all CRDs in the openshift-marketplace namespace: $ make deploy-csv
- and as the last step, perform the actual installation by creating an OperatorGroup and a Subscription in the test namespace.
Releasing operator
The releases of the operator are automatically managed via GitHub Actions workflow defined in this repository.
Broken release
If there is any broken release that cannot be built & pushed through the pipeline - for example because of this error:
Invalid bundle toolchain-host-operator.v0.0.321-147-commit-477c4b7-5e49228, bundle specifies a non-existent replacement toolchain-host-operator.v0.0.320-146-commit-f46a8aa-8f94bc0
then the release has to be fixed manually. In such a case, please follow these steps:
Note: If the problem occurred when releasing registration-service, then don't do anything in the host-operator, but follow the steps in the registration-service repo.
- Log in to quay.io using an account that has write permissions in the quay.io/codeready-toolchain/host-operator repo.
- Check out the problematic (missing) commit that failed in the pipeline and has to be manually released.
- Run make docker-push QUAY_NAMESPACE=codeready-toolchain
- Run make push-to-quay-staging QUAY_NAMESPACE=codeready-toolchain
OpenShift internal docker registry
In case you want to use the OpenShift internal docker registry instead of quay, you can achieve the same thing via running:
$ make install-operator-using-os-registry
In case you have issues with the certificate while logging in to or pushing to the OpenShift internal docker registry, follow these instructions:
TO_REGISTRY=$(oc get images.config.openshift.io/cluster -o jsonpath='{.status.externalRegistryHostnames[0]}')
oc get secret router-certs-default -n openshift-ingress -o json | jq -r '.data["tls.crt"]' | base64 -d > ca.crt
sudo cp ca.crt /etc/pki/ca-trust/source/anchors/${TO_REGISTRY}.crt
sudo update-ca-trust enable
sudo systemctl daemon-reload
sudo systemctl restart docker
docker login -u kubeadmin -p $(oc whoami -t) ${TO_REGISTRY}
End-to-End tests
Background & pairing
E2E tests are not located in this repository - all e2e tests are in the toolchain-e2e repo. However, it's still possible to run them locally from this repo - see Running End-to-End Tests.
When a change introduced in this repository should be either covered by e2e tests or requires changes in the already existing tests, then all needed changes should go into the toolchain-e2e repo. The logic that executes tests in openshift-ci automatically tries to pair a PR opened for this (host-operator) repository with a branch that potentially exists in the developer's fork of the toolchain-e2e repo. This pairing is based on the branch name.
For example, if a developer with the GH account cooljohn opens a PR (for the host-operator repo) from a branch fix-reconcile, then the logic checks if there is a branch fix-reconcile also in the cooljohn/toolchain-e2e fork.
If there is, then the logic:
- clones the latest changes from codeready-toolchain/toolchain-e2e
- fetches the fix-reconcile branch from the cooljohn/toolchain-e2e fork
- merges the master branch with the changes from the fix-reconcile branch
- clones the latest changes from the member-operator repo and builds & deploys the member-operator image out of it
- builds & deploys the host-operator image from the code that is in the PR
- runs e2e tests against both operators from the merged branch of the toolchain-e2e repo
If a branch with the same name does not exist, then it only clones the latest changes from toolchain-e2e and runs the e2e tests from master.
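The pairing decision described above can be sketched as a small shell function. This is a hypothetical reimplementation for illustration only; the real logic lives in the openshift-ci tooling:

```shell
# Sketch of the branch-pairing decision (hypothetical; not the real CI code).
# $1 = branch name of the host-operator PR
# $2 = space-separated list of branches in the developer's toolchain-e2e fork
pick_e2e_ref() {
  branch="$1"; fork_branches="$2"
  case " $fork_branches " in
    *" $branch "*) echo "fork/$branch merged into master" ;;
    *)             echo "codeready-toolchain/toolchain-e2e master" ;;
  esac
}

pick_e2e_ref fix-reconcile "fix-reconcile some-other-branch"  # -> fork/fix-reconcile merged into master
pick_e2e_ref fix-reconcile "unrelated-branch"                 # -> codeready-toolchain/toolchain-e2e master
```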
If you still don’t know what to do with e2e tests in some use-cases, go to the What to do section where all use-cases are covered.
Prerequisites if running locally
Minishift
If you are running these tests locally on minishift, make sure that you have exposed minishift's docker-env, so that the deployment can use the locally built image. You can expose it by running the following command:
eval $(minishift docker-env)
Note: This is not required for the openshift-ci environment.
OpenShift 4.2+
- Make sure you have set the QUAY_NAMESPACE variable: export QUAY_NAMESPACE=<quay-username>
- Log in to the target OpenShift cluster with cluster admin privileges.
- Make sure the visibility of the host-operator repository in quay is set to public (https://quay.io/repository/<your-username>/host-operator?tab=settings).
Running End-to-End Tests
Although the e2e tests are in a separate repository, it's still possible to run them from this repo (host-operator) and also against the current code at HEAD. There are two Makefile targets that execute the e2e tests:
- make test-e2e - this target clones the latest changes from toolchain-e2e and runs the e2e tests for both operators from master. As the deployment for host-operator it uses the current code at HEAD.
- make test-e2e-local - this target doesn't clone anything; it runs the e2e tests for both operators from the ../toolchain-e2e directory. As the deployment for host-operator it uses the current code at HEAD.
The tests executed within the toolchain-e2e repo will take care of creating all needed namespaces with random names (or see below for enforcing specific namespace names). They will also create all required CRDs, roles, and role bindings for the service accounts, build the Docker images for both operators, and push them to the OpenShift container registry. Finally, they will deploy the operators and run the tests using the operator-sdk.
NOTE: you can override the default namespace names where the end-to-end tests are going to be executed - eg.: `make test-e2e HOST_NS=my-host MEMBER_NS=my-member`.
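The override above composes into a single make invocation, sketched here (echoed instead of executed so it can be inspected; the variable names HOST_NS and MEMBER_NS come from the note above):

```shell
# Sketch of composing the namespace override for the e2e target.
HOST_NS=my-host
MEMBER_NS=my-member
CMD="make test-e2e HOST_NS=$HOST_NS MEMBER_NS=$MEMBER_NS"
echo "$CMD"   # -> make test-e2e HOST_NS=my-host MEMBER_NS=my-member
```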
What to do
If you are still confused about the e2e tests' location, execution, and branch pairing, see the following cases and the steps needed:
- Working locally:
  - Need to test your code using the latest version of the e2e tests from the toolchain-e2e repo:
    - execute make test-e2e
  - Need to test your code using the e2e tests located in the ../toolchain-e2e directory:
    - execute make test-e2e-local
- Creating a PR:
  - Your PR doesn't need any changes in the toolchain-e2e repo:
    1. check the name of the branch you are going to create a PR for
    2. make sure that your fork of the toolchain-e2e repo doesn't contain a branch with the same name
    3. create a PR
  - Your PR requires changes in the toolchain-e2e repo:
    1. check the name of the branch you are going to create a PR for
    2. create a branch with the same name within your fork of the toolchain-e2e repo and put all necessary changes there
    3. push all changes into both forks of the repositories toolchain-e2e and host-operator
    4. create a PR for host-operator
    5. create a PR for toolchain-e2e
Verifying the OpenShift CI configuration
It's possible to verify the OpenShift CI config from the developer's laptop while all the jobs are executed on the remote, online CI platform:
- check out and build the CI Operator command line tool
- log in to https://console.svc.ci.openshift.org (via GH OAuth) and copy the login command (you may need to switch to the application console)
- log in with the aforementioned command
- run the CI jobs with ci-operator --config ../../openshift/release/ci-operator/config/codeready-toolchain/host-operator/codeready-toolchain-host-operator-master.yaml --git-ref=codeready-toolchain/host-operator@master assuming that the OpenShift Release repo was checked out
Note: you can ignore the RBAC issues that are displayed in the console.
Adding cluster to SaaS
The CodeReady Toolchain architecture contains two types of clusters: host and member.
To connect these two clusters together it is necessary to run a script add-cluster.sh that is part of the toolchain-common repository.
For more detailed information about the script see the README "Script add-cluster.sh" chapter.
There are two Makefile targets available in this repository that execute the script:
- $ make add-member-to-host, which executes ../toolchain-common/scripts/add-cluster.sh member member-cluster
- $ make add-host-to-member, which executes ../toolchain-common/scripts/add-cluster.sh host host-cluster
Note: In order to run them, you need to have the toolchain-common repository cloned to the same parent directory as this repository.