- requirements:
  - define the following env vars:
    - `COS_BASE_PATH` → base URL for the managed connector service control plane
    - `KAS_BASE_PATH` → base URL for the managed kafka service control plane

  > **Tip:** I use direnv with the following set-up:
  >
  > ```shell
  > export OCM_CONFIG=$PWD/.ocm.json
  > export KUBECONFIG=$PWD/.kube/config
  > export COS_BASE_PATH=https://cos-fleet-manager-cos.rh-fuse-153f1de160110098c1928a6c05e19444-0000.eu-de.containers.appdomain.cloud
  > export KAS_BASE_PATH=https://api.openshift.com
  > ```
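Since every later step depends on these variables, it can help to fail fast when they are missing. A minimal sketch, assuming bash (the `check_env` helper name is mine, not part of the project):

```shell
# check that the required control-plane env vars are set before doing anything
check_env() {
  local v missing=0
  for v in COS_BASE_PATH KAS_BASE_PATH; do
    # ${!v} is bash indirect expansion: the value of the variable named by $v
    if [ -z "${!v:-}" ]; then
      echo "missing required env var: $v" >&2
      missing=1
    fi
  done
  return "$missing"
}
```

Calling `check_env` at the top of a local script gives a clear error instead of a confusing failure deep inside a curl or CLI call.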
  - retrieve your ocm-offline-token from https://qaprodauth.cloud.redhat.com/openshift/token using the _kafka_supporting account
  - follow the steps on that page to download and install the ocm command-line tool, then run `ocm login` with the provided token
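The login step can be wrapped in a small helper that reads the token from a local file rather than pasting it on the command line. This is only a sketch: the helper and the `.ocm-token` file name are my assumptions; the `ocm login --token` invocation itself comes from the ocm CLI:

```shell
# log in to OCM using the offline token stored in a local file,
# keeping the token out of your shell history
ocm_login() {
  local token_file="${1:-.ocm-token}"   # assumed file name, not a project convention
  if [ ! -f "$token_file" ]; then
    echo "token file not found: $token_file" >&2
    return 1
  fi
  ocm login --token="$(cat "$token_file")"
}
```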
> **Note:** This is an example installation that consists of the following steps:
- set up minikube

  ```shell
  # you may need to tune cpus and memory depending on your laptop config
  minikube start --profile cos --cpus=4 --memory=4096
  ```
- install camel-k 1.8.x

  This will install Camel K in the running cluster, which is needed to create connectors based on it, so make sure the `minikube start` was successful.

  ```shell
  kamel install --olm=false --skip-registry-setup
  ```
- install latest strimzi

  This will install Strimzi in the running cluster, which is needed to create connectors based on Debezium, so make sure the `minikube start` was successful.

  ```shell
  kubectl apply -f 'https://strimzi.io/install/latest?namespace=default'
  ```
- install images

  This step is only necessary if you want to run everything inside the cluster. For development purposes, if you want to run the synchronizer and operators with `quarkus:dev`, this is not needed.

  ```shell
  eval $(minikube --profile cos docker-env)
  ./mvnw clean install -DskipTests=true -Pcontainer-build
  ```
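To confirm the images actually landed in minikube's docker daemon, something like the following can help. The `check_images` helper and the `cos-fleetshard` image-name filter are my assumptions, not part of the project:

```shell
# list the connector images built into the cos minikube profile's docker daemon
check_images() {
  # point the docker CLI at minikube's daemon for this shell
  eval "$(minikube --profile cos docker-env)"
  docker images --format '{{.Repository}}:{{.Tag}}' | grep cos-fleetshard
}
```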
- configure pull secret

  In order to use private images on quay, a pull secret needs to be created:

  - copy the content of rhoas-pull-docker to a local file
  - create a pull secret:

    ```shell
    kubectl create secret generic addon-pullsecret \
        --from-file=${path of rhoas-pull-docker} \
        --type=kubernetes.io/dockerconfigjson
    ```
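A quick sanity check that the secret was created with the expected type can be sketched as follows (the helper name is mine; it assumes `kubectl` points at the cos minikube profile):

```shell
# verify the addon-pullsecret exists and has the dockerconfigjson type
verify_pullsecret() {
  kubectl get secret addon-pullsecret -o jsonpath='{.type}' \
    | grep -q 'kubernetes.io/dockerconfigjson'
}
```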
install CRDs
This will install the CRDs for Managed Connectors, it’s Operators and Clusters.
./etc/scripts/deploy_fleetshard_crds.sh
- install operators and sync

  ```shell
  kubectl apply -k etc/kubernetes/operator-camel/local
  kubectl apply -k etc/kubernetes/operator-debezium/local
  kubectl apply -k etc/kubernetes/sync/local
  ```

  At this point, operators and sync are deployed, but they are not running: their replicas are set to 0 by default because some resources still have to be configured.

  ```
  ➜ kubectl get deployments
  NAME                               READY   UP-TO-DATE   AVAILABLE   AGE
  camel-k-operator                   1/1     1            1           2d3h
  cos-fleetshard-operator-camel      0/0     0            0           6s
  cos-fleetshard-operator-debezium   0/0     0            0           5s
  cos-fleetshard-sync                0/0     0            0           4s
  strimzi-cluster-operator           1/1     1            1           2d3h
  ```
- create cluster and configure secrets

  This section expects you to have cos-tools/bin in your PATH, but you may also just run the scripts from inside the bin directory.

  > **Note:** This creates a new cluster for you on the fleet manager; remember to delete it once done.

  ```shell
  SUFFIX=$(uuidgen | tr -d '-')
  create-cluster-secret $(create-cluster "$USER-$SUFFIX" | jq -r '.id') cos-fleetshard-sync-config
  ```

  When you're done you may query for created clusters with `get-clusters` and delete them with `delete-clusters <cluster id>`.
scale deployments
kubectl scale deployment -l "app.kubernetes.io/part-of=cos" --replicas=1
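After scaling up, you can block until everything reports Available instead of polling `kubectl get deployments` by hand. A sketch using the same label selector as the scale command (the helper name and the default timeout are my choices):

```shell
# wait until every cos deployment becomes Available, or time out
wait_for_cos() {
  kubectl wait deployment -l "app.kubernetes.io/part-of=cos" \
    --for=condition=Available --timeout="${1:-120s}"
}
```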
> **Note:** Although this section expects you to use a completely new kubernetes cluster, you may also just stop
- set up minikube

  ```shell
  # you may need to tune this command
  minikube start --profile cos-testing
  ```
- install CRDs

  ```shell
  # install custom resources
  ./etc/scripts/deploy_fleetshard_crds.sh
  ./etc/scripts/deploy_camel-k_crds.sh
  ./etc/scripts/deploy_strimzi_crds.sh
  ```
- run tests

  ```shell
  ./mvnw clean install
  ```