Hybrid Cloud demo: Quarkus backends distributed across OpenShift clusters installed on different public clouds, connected by Skupper and consumed by a frontend.
You can find a guide on how to install such clusters inside the installation directory.
# Windows
curl -fLO https://github.com/skupperproject/skupper/releases/download/0.7.0/skupper-cli-0.7.0-windows-amd64.zip
unzip skupper-cli-0.7.0-windows-amd64.zip
mkdir %UserProfile%\bin
move skupper.exe %UserProfile%\bin
set PATH=%PATH%;%UserProfile%\bin
# Linux
curl -fL https://github.com/skupperproject/skupper/releases/download/0.7.0/skupper-cli-0.7.0-linux-amd64.tgz | tar -xzf -
mkdir -p $HOME/bin
mv skupper $HOME/bin
export PATH=$PATH:$HOME/bin
# macOS
curl -fL https://github.com/skupperproject/skupper/releases/download/0.7.0/skupper-cli-0.7.0-mac-amd64.tgz | tar -xzf -
mkdir -p $HOME/bin
mv skupper $HOME/bin
export PATH=$PATH:$HOME/bin
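To verify that the CLI is installed and on your PATH, you can print its version:
skupper version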
Important: If you are using minikube as one of the clusters, run minikube tunnel in a new terminal.
Since Kubernetes gives you cloud portability, the workload can be deployed on any Kubernetes context. Deploy it with the following two commands in any cluster, changing only the WORKER_CLOUD_ID value to differentiate between them:
kubectl apply -f backend.yml
kubectl set env deployment/hybrid-cloud-backend WORKER_CLOUD_ID="[minikube|azr|gcp|aws]" #change it accordingly
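Setting the environment variable triggers a new rollout. As a quick sanity check, you can wait for it to complete and inspect the resulting env entries (the jsonpath query below is just one way to do that):
kubectl rollout status deployment/hybrid-cloud-backend
kubectl get deployment hybrid-cloud-backend -o jsonpath='{.spec.template.spec.containers[0].env}'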
Then annotate the backend service so that it is Skupper-aware:
kubectl annotate service hybrid-cloud-backend skupper.io/proxy=http
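If you want to double-check that the annotation was applied to the service, one way is:
kubectl get service hybrid-cloud-backend -o jsonpath='{.metadata.annotations}'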
Deploy the frontend to your main cluster. The Kubernetes service is of type LoadBalancer.
If you are deploying the frontend in your public cluster, open the frontend.yml file and modify the Ingress configuration with your host:
spec:
rules:
- host: ""
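For example, with a placeholder host of frontend.example.com, the rule would read host: "frontend.example.com". You can edit the file by hand, or script the change; the one-liner below assumes GNU sed (on macOS, use sed -i ''):
sed -i 's/host: ""/host: "frontend.example.com"/' frontend.yml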
Tip: In the case of OpenShift you can run oc expose service hybrid-cloud-frontend after deploying the frontend resource; modifying the Ingress configuration is then not required. Of course, the first approach works in OpenShift as well.
Deploy the frontend by calling:
kubectl apply -f frontend.yml
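Since the service is of type LoadBalancer, the cloud provider can take a minute or two to provision the external address; you can watch for it with:
kubectl get service hybrid-cloud-frontend --watch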
In your main cluster, initialize Skupper and create the connection token:
skupper init --console-auth unsecured # (1)
Skupper is now installed in namespace 'default'. Use 'skupper status' to get more information.
skupper status
Skupper is installed in namespace 'default'. Status pending...
(1) This allows anyone to access the Skupper UI to visualize the clouds. Fine for demos, but not to be used in production.
Check the status of the Skupper pods. It takes a bit of time (usually around 2 minutes) until all the pods are running:
kubectl get pods
NAME READY STATUS RESTARTS AGE
hybrid-cloud-backend-5948955b7-m5dzk 1/1 Running 0 16m
hybrid-cloud-backend-proxy-89f44c75-vx9kn 1/1 Running 0 51s
hybrid-cloud-frontend-7c85f9dfff-fmbd6 1/1 Running 0 10m
skupper-proxy-controller-77f6c568cc-f8xxd 1/1 Running 0 102s
skupper-router-7976948d9f-hn6ms 1/1 Running 0 104s
Finally, create a token:
skupper token create token.yaml -t cert
Connection token written to token.yaml
In all the other clusters, use the connection token created in the previous step:
skupper init
skupper link create token.yaml
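To confirm that the link came up from the joining cluster, skupper status works here as well; CLI versions that support skupper link also offer a dedicated status subcommand:
skupper status
skupper link status # only on CLI versions that provide the 'skupper link' commands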
Everything is connected and ready to be used. This has been the short version to get you started; continue reading if you want to learn how to build the Docker images, deploy them, etc.
If you run:
kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hybrid-cloud-backend ClusterIP 172.30.32.251 <none> 8080/TCP 40m
hybrid-cloud-frontend LoadBalancer 172.30.25.65 add076df5798711eaad1a0241cddbab7-1371911574.eu-central-1.elb.amazonaws.com 8080:32647/TCP 39m
kubernetes ClusterIP 172.30.0.1 <none> 443/TCP 71m
openshift ExternalName <none> kubernetes.default.svc.cluster.local <none> 70m
skupper-controller ClusterIP 172.30.247.104 <none> 8080/TCP 34m
skupper-internal ClusterIP 172.30.109.84 <none> 55671/TCP,45671/TCP 34m
skupper-messaging ClusterIP 172.30.64.245 <none> 5671/TCP 34m
You’ll notice that there is a skupper-controller service, which is the entry point for the Skupper UI. Expose this service so it is reachable from outside the cluster and you’ll be able to access the Skupper UI.
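Two standard ways to do that, depending on your cluster (the service listens on port 8080, as shown above): a quick port-forward for local access, or an OpenShift route for a public URL.
kubectl port-forward service/skupper-controller 8080:8080 # then browse to http://localhost:8080
oc expose service skupper-controller # OpenShift only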
If you want to build, push and deploy the service:
cd backend
./mvnw clean package -DskipTests -Dquarkus.kubernetes.deploy=true -Pazr
If the service is already pushed to quay.io, you can skip the build and push parts:
cd backend
./mvnw clean package -DskipTests -Pazure -Dquarkus.kubernetes.deploy=true -Dquarkus.container-image.build=false -Dquarkus.container-image.push=false
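Note that the Maven build deploys to whatever cluster your current kubeconfig context points at, so switch contexts before deploying to each cloud (the context name below is hypothetical):
kubectl config get-contexts
kubectl config use-context my-aws-cluster # hypothetical context name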
If you want to build, push and deploy the frontend:
cd frontend
./mvnw clean package -DskipTests -Dquarkus.kubernetes.deploy=true -Pazr -Dquarkus.kubernetes.host=<your_public_host>
If the service is already pushed to quay.io, you can skip the build and push parts:
cd frontend
./mvnw clean package -DskipTests -Pazr -Dquarkus.kubernetes.deploy=true -Dquarkus.container-image.build=false -Dquarkus.container-image.push=false
The following profiles are provided: -Pazr, -Paws, -Pgcp and -Plocal. Each of them just sets an environment variable to identify the cluster.
Make sure you have at least the backend project deployed on 2 different clusters. The frontend project can be deployed to just one cluster.
Here, we will assume that it is deployed in a local cluster named local and a public cluster named public.
Make sure to have 2 terminals with separate sessions logged into each of your clusters with the correct namespace context (but within the same folder).
Follow the instructions provided here.
- In your public terminal session:
skupper init --id public
skupper connection-token private-to-public.yaml
- In your local terminal session:
skupper init --id private
skupper connect private-to-public.yaml
- In the terminal for the local cluster, annotate the hybrid-cloud-backend service:
kubectl annotate service hybrid-cloud-backend skupper.io/proxy=http
- In the terminal for the public cluster, annotate the hybrid-cloud-backend service:
kubectl annotate service hybrid-cloud-backend skupper.io/proxy=http
Both services are now connected; if you scale one down to 0, or it gets overloaded, traffic is transparently load-balanced to the other cluster.
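A quick way to see this failover in action, assuming both clusters are linked as described above: scale the backend down to zero in one cluster and keep hitting the frontend; responses should then carry the other cluster's WORKER_CLOUD_ID.
kubectl scale deployment/hybrid-cloud-backend --replicas=0 # run in one of the clusters
kubectl scale deployment/hybrid-cloud-backend --replicas=1 # restore it afterwards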