Kubernetes Operator for easy setup and management of Apicurio Studio instances.
For development, or on bare OpenShift and Kubernetes clusters without Operator Lifecycle Manager (OLM).
Start by cloning this repository and then, optionally, create a new namespace:
```shell
$ git clone https://github.com/apicurio/apicurio-studio-operator.git
$ cd apicurio-studio-operator/
$ kubectl create namespace apicurio
```
Then, from the repository root directory, create the CRD and the resources needed by the operator:
```shell
kubectl create -f deploy/crd/apicuriostudios.studio.apicur.io-v1.yml
kubectl create -f deploy/service_account.yaml -n apicurio
kubectl create -f deploy/role.yaml -n apicurio
kubectl create -f deploy/role_binding.yaml -n apicurio
```
Finally, deploy the operator:
```shell
kubectl create -f deploy/operator.yaml -n apicurio
```
Wait a minute or two and check that everything is running:

```shell
$ kubectl get pods -n apicurio
NAME                                        READY   STATUS    RESTARTS   AGE
apicurio-studio-operator-76d47d899f-2tzzm   1/1     Running   0          1m
```
Now just create an `ApicurioStudio` custom resource!
Operator Lifecycle Manager should be installed on your cluster first. Please follow this guide to learn how to proceed.
You can then use the OperatorHub.io catalog of Kubernetes Operators, sourced from multiple providers. It offers an alternative way to install stable versions of Apicurio Studio using the Apicurio Studio Operator. To install from OperatorHub.io, locate the Apicurio Studio Operator and follow the instructions provided.
As an alternative, raw resources can also be found in the `/deploy/olm` directory of this repository.
Below is a minimalistic `ApicurioStudio` custom resource that I use on my OpenShift cluster. This lets all the defaults apply (see below for details).
```yaml
apiVersion: studio.apicur.io/v1alpha1
kind: ApicurioStudio
metadata:
  name: apicurio-sample
spec:
  name: apicurio-sample
```
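To try it out, you can save this manifest to a file and apply it in the namespace where the operator runs. A minimal sketch (the file name is arbitrary, and the apply step assumes a reachable cluster):

```shell
# Save the minimal custom resource to a file (hypothetical file name).
cat > apicurio-sample.yaml <<'EOF'
apiVersion: studio.apicur.io/v1alpha1
kind: ApicurioStudio
metadata:
  name: apicurio-sample
spec:
  name: apicurio-sample
EOF

# Apply it in the namespace created earlier (requires a running cluster).
if command -v kubectl >/dev/null 2>&1; then
  kubectl apply -f apicurio-sample.yaml -n apicurio
else
  echo "kubectl not found; skipping apply"
fi
```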
This form can only be used on OpenShift, as vanilla Kubernetes needs more information to customize `Ingress` resources.
Here is now a complete `ApicurioStudio` custom resource that I use, for example, on OpenShift.
```yaml
apiVersion: studio.apicur.io/v1alpha1
kind: ApicurioStudio
metadata:
  name: apicurio-sample
spec:
  name: apicurio-sample
  apiModule:
    image: apicurio/apicurio-studio-api:latest
    resources:
      requests:
        cpu: 100m
        memory: 800Mi
      limits:
        cpu: 1
        memory: 1500Mi
    ingress:
      generateCert: false
      secretRef: apicurio-studio-api-ingress-secret
      annotations:
        cert-manager.io/cluster-issuer: letsencrypt-prod
        kubernetes.io/ingress.class: nginx
  wsModule: {}
  studioModule: {}
  keycloak:
    install: true
    realm: apicurio
    volumeSize: 1Gi
  database:
    install: true
    database: apicuriodb
    driver: postgresql
    type: postgresql9
    volumeSize: 1Gi
  features:
    asyncAPI: true
    graphQL: true
    microcks:
      apiUrl: https://microcks-microcks.apps.cluster-0f5f.0f5f.sandbox1056.opentlc.com/api
      clientId: microcks-serviceaccount
      clientSecret: ab54d329-e435-41ae-a900-ec6b3fe15c54
```
For a deployment on vanilla Kubernetes, you'll need to specify `url` attributes for both the root and `keycloak` elements:
```yaml
apiVersion: studio.apicur.io/v1alpha1
kind: ApicurioStudio
metadata:
  name: apicurio-sample
spec:
  name: apicurio-sample
  url: apicurio.192.168.64.11.nip.io
  keycloak:
    url: keycloak.apicurio.192.168.64.11.nip.io
```
The `spec.url` will be used as a suffix to generate `Ingress` addresses for the raw Apicurio components, and you'll end up with:
- `apicurio-sample-ui.apicurio.192.168.64.11.nip.io` ingress for the frontend,
- `apicurio-sample-ws.apicurio.192.168.64.11.nip.io` ingress for the WS server,
- `apicurio-sample-api.apicurio.192.168.64.11.nip.io` ingress for the API server.
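The naming scheme above is a simple composition of the resource name, a per-module suffix, and `spec.url`. A small illustration of how the hostnames are derived (this is a sketch of the convention, not the operator's actual code):

```shell
# Values taken from the example custom resource above.
NAME="apicurio-sample"
URL="apicurio.192.168.64.11.nip.io"

# One ingress host per module: <name>-<module>.<spec.url>
for module in ui ws api; do
  echo "${NAME}-${module}.${URL}"
done
# prints apicurio-sample-ui.apicurio.192.168.64.11.nip.io, then -ws, then -api
```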
The `spec.keycloak.url` will be used as-is to expose an ingress for accessing Keycloak.
The table below describes all the fields of the `ApicurioStudio` custom resource, providing information on what's mandatory and what's optional, as well as default values.
TO FINALIZE
Creating an instance of `ApicurioStudio` with embedded Keycloak and database modules implies the deployment of 5 pods in addition to the operator itself.
```shell
$ kubectl get pods -n apicurio
NAME                                        READY   STATUS    RESTARTS   AGE
apicurio-sample-api-97666ccdf-vqfb8         1/1     Running   0          62m
apicurio-sample-auth-b864b59dd-fpkhx        1/1     Running   0          62m
apicurio-sample-db-6f8895cc4-tr2rh          1/1     Running   0          62m
apicurio-sample-ui-78d96ff4cf-6vtm8         1/1     Running   1          62m
apicurio-sample-ws-7d4f996679-tgfbd         1/1     Running   0          62m
apicurio-studio-operator-76d47d899f-2tzzm   1/1     Running   0          64m
```
The `ApicurioStudio` resource manages its status as a sub-resource that is updated as the deployment progresses. Each module of the studio has its own status-tracking information, as shown in the example below:
```yaml
status:
  uiModule:
    error: false
    lastTransitionTime: '2021-07-02T08:09:52.71923'
    message: 1 ready replica(s)
    state: READY
  apiModule:
    error: false
    lastTransitionTime: '2021-07-02T08:08:28.552723'
    message: 1 ready replica(s)
    state: READY
  message: All module deployments are ready
  databaseModule:
    error: false
    lastTransitionTime: '2021-07-02T08:08:23.990673'
    message: 1 ready replica(s)
    state: READY
  wsModule:
    error: false
    lastTransitionTime: '2021-07-02T08:08:30.833432'
    message: 1 ready replica(s)
    state: READY
  error: false
  state: READY
  wsUrl: >-
    apicurio-sample-ws-apicurio-operator.apps.cluster-9d5e.9d5e.sandbox1893.opentlc.com
  apiUrl: >-
    apicurio-sample-api-apicurio-operator.apps.cluster-9d5e.9d5e.sandbox1893.opentlc.com
  studioUrl: >-
    apicurio-sample-ui-apicurio-operator.apps.cluster-9d5e.9d5e.sandbox1893.opentlc.com
  keycloakModule:
    error: false
    lastTransitionTime: '2021-07-02T08:09:51.319282'
    message: 1 ready replica(s)
    state: READY
  keycloakUrl: >-
    apicurio-sample-auth-apicurio-operator.apps.cluster-9d5e.9d5e.sandbox1893.opentlc.com
```
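Because the top-level `status.state` flips to `READY` only once every module is ready, scripts can key off that single field. On a live cluster you would typically use something like `kubectl get apicuriostudio apicurio-sample -n apicurio -o jsonpath='{.status.state}'` (the resource name is the one from the examples above). A local sketch of the same idea, parsing a sample status snippet with `sed`:

```shell
# Sample status snippet as it might come back from the API server (trimmed).
STATUS_YAML='status:
  error: false
  state: READY
  message: All module deployments are ready'

# Extract the top-level state field (two-space indented "state:" line).
STATE=$(printf '%s\n' "$STATUS_YAML" | sed -n 's/^  state: //p')
echo "$STATE"   # prints READY
```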
The operator is made of 2 modules:
- `api` contains the model for manipulating the Custom Resource elements using Java,
- `operator` contains the Kubernetes controller implementing the reconciliation logic. It is implemented with Quarkus.
Simply execute:

```shell
mvn clean install
```
Produce a native container image with the name elements specified within the `pom.xml`:

```shell
mvn package -Pnative -Dquarkus.native.container-build=true -Dquarkus.container-image.build=true
```

NOTE: To build a native image, you must have GraalVM installed. See here for instructions on how to set it up.