This project is a web console designed to facilitate interactions with Apache Kafka® instances on Kubernetes, leveraging the Strimzi Cluster Operator. It is composed of three main parts:
- a REST API backend developed with Java and Quarkus
- a user interface (UI) built with Next.js and PatternFly
- a Kubernetes operator developed with Java and Quarkus
The future goals of this project are to provide a user interface to interact with and manage additional data streaming components such as:
- Apicurio Registry for message serialization/deserialization and validation
- Kroxylicious
- Apache Flink
Contributions and discussions around use cases for these (and other relevant) components are both welcome and encouraged.
The console application may be run either in a Kubernetes cluster or locally to try it out.
Please refer to the installation README file for detailed information about how to install the latest release of the console in a Kubernetes cluster.
Running the console locally requires the use of a remote or locally-running Kubernetes cluster that hosts the Strimzi Kafka operator and any Apache Kafka® clusters that will be accessed from the console. To get started, you will need to provide a console configuration file and credentials to connect to the Kubernetes cluster where Strimzi and Kafka are available.
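The steps below rely on a few command-line tools. As a quick pre-flight check (the tool list here is an assumption inferred from the commands in this guide: `kubectl` and `yq` appear in the token setup, and `make` drives the compose targets), you can confirm each is on your `PATH`:

```shell
# Pre-flight check (illustrative): report which of the tools used in the
# steps below are available on PATH.
for tool in kubectl yq make; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: missing"
  fi
done
```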
1. Using the `console-config-example.yaml` file as an example, create your own configuration in a file `console-config.yaml` in the repository root. The `compose.yaml` file expects this location to be used; any difference in name or location requires an adjustment to the compose file.

2. Install the prerequisite software into the Kubernetes cluster. This step assumes none of the prerequisites have yet been installed.

   ```shell
   ./install/000-install-dependency-operators.sh <your namespace>
   ./install/001-deploy-prometheus.sh <your namespace> <your cluster base domain>
   ./install/002-deploy-console-kafka.sh <your namespace> <your cluster base domain>
   ```

   Note that the Prometheus instance will be available at `http://console-prometheus.<your cluster base domain>` when this step completes.

3. Provide the Prometheus endpoint, the API server endpoint, and the service account token that you would like to use to connect to the Kubernetes cluster. These may be placed in a `compose.env` file that will be detected when starting the console.

   ```
   CONSOLE_API_SERVICE_ACCOUNT_TOKEN=<TOKEN>
   CONSOLE_API_KUBERNETES_API_SERVER_URL=https://my-kubernetes-api.example.com:6443
   CONSOLE_METRICS_PROMETHEUS_URL=http://console-prometheus.<your cluster base domain>
   ```

   The service account token may be obtained using the `kubectl create token` command. For example, to create a service account named "console-server" (from `console-server.serviceaccount.yaml`) with the correct permissions and a token that expires in 1 year (`yq` required):

   ```shell
   export NAMESPACE=<service account namespace>
   kubectl apply -n ${NAMESPACE} -f ./install/resources/console/console-server.clusterrole.yaml
   kubectl apply -n ${NAMESPACE} -f ./install/resources/console/console-server.serviceaccount.yaml
   yq '.subjects[0].namespace = strenv(NAMESPACE)' ./install/resources/console/console-server.clusterrolebinding.yaml | kubectl apply -n ${NAMESPACE} -f -
   kubectl create token console-server -n ${NAMESPACE} --duration=$((365*24))h
   ```

4. By default, the provided configuration will use the latest console release container images. If you would like to build your own images with changes you've made locally, you may also set `CONSOLE_API_IMAGE` and `CONSOLE_UI_IMAGE` in your `compose.env` and build the images with `make container-images`.

5. Start the environment with `make compose-up`.

6. When finished with the local console process, run `make compose-down` to clean up.
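For reference, the `compose.env` described above uses plain `KEY=VALUE` pairs. The sketch below writes one with placeholder values (the token, API server URL, and Prometheus hostname are illustrative, not real endpoints or credentials) and confirms it can be sourced by a shell:

```shell
# Write a compose.env with placeholder values (illustrative only; substitute
# your own token, API server URL, and Prometheus URL).
cat > compose.env <<'EOF'
CONSOLE_API_SERVICE_ACCOUNT_TOKEN=changeme
CONSOLE_API_KUBERNETES_API_SERVER_URL=https://my-kubernetes-api.example.com:6443
CONSOLE_METRICS_PROMETHEUS_URL=http://console-prometheus.example.com
EOF

# Source the file to confirm the variables parse as plain KEY=VALUE pairs.
. ./compose.env
echo "API server: ${CONSOLE_API_KUBERNETES_API_SERVER_URL}"
```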
We welcome contributions of all forms. Please see the CONTRIBUTING file for how to get started. Join us in enhancing the capabilities of this console for Apache Kafka® on Kubernetes.
Each release requires an open milestone that includes the issues/pull requests that are part of the release. All issues in the release milestone must be closed. The name of the milestone must match the version number to be released.
The release action flow requires that the following secrets are configured in the repository:
- `IMAGE_REPO_HOSTNAME` - the host (optionally including a port number) of the image repository where images will be pushed
- `IMAGE_REPO_NAMESPACE` - namespace/library/user where the image will be pushed
- `IMAGE_REPO_USERNAME` - user name for authentication to server `IMAGE_REPO_HOSTNAME`
- `IMAGE_REPO_PASSWORD` - password for authentication to server `IMAGE_REPO_HOSTNAME`

These credentials will be used to push the release image to the repository configured in the `.github/workflows/release.yml` workflow.
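To illustrate how these secrets relate, the pushed image reference is effectively `IMAGE_REPO_HOSTNAME/IMAGE_REPO_NAMESPACE/<image>:<tag>`. The sketch below uses made-up values, and the `console-api` image name is a placeholder, not the repository's actual configuration:

```shell
# Illustrative only: how the hostname and namespace secrets combine into a
# full image reference. The values and the "console-api" image name are
# placeholders.
IMAGE_REPO_HOSTNAME=quay.io
IMAGE_REPO_NAMESPACE=example-org
echo "${IMAGE_REPO_HOSTNAME}/${IMAGE_REPO_NAMESPACE}/console-api:0.1.0"
```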
Releases are performed by modifying the `.github/project.yml` file, setting `current-version` to the release version and `next-version` to the next SNAPSHOT version. Open a pull request with the changed `project.yml` to initiate the pre-release workflows. At this phase, the project milestone will be checked to verify that no issues in the release milestone remain open. Additionally, the project's integration tests will be run.
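As an illustration, a `project.yml` preparing a 0.2.0 release might look like the fragment below. The version numbers are made up, and the `release:` key layout is an assumption based on common Quarkus project conventions; check the file in this repository for the authoritative structure.

```yaml
release:
  current-version: 0.2.0
  next-version: 0.3.0-SNAPSHOT
```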
Once approved and the pull request is merged, the release action will execute. This action will run the Maven release plugin to tag the release commit, build the application artifacts, create the build image, and push the image to (currently) quay.io. If successful, the action will push the new tag to the GitHub repository and generate release notes listing all of the closed issues included in the milestone. Finally, the milestone will be closed.
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.