KAS Installer allows the deployment and configuration of Managed Kafka Service in a single K8s cluster.
- Prerequisites
- Description
- Usage
- Installation Modes
- Fleet Manager Parameter Customization
- SSO Providers
- Custom Components
- Using rhoas CLI
- Running the User Interface
- Custom TLS
- Legacy Scripts
- Running E2E Test Suite (experimental)
- jq
- curl
- OpenShift. In the future there are plans to make it compatible with native K8s. Currently an OpenShift Dedicated based environment is needed (it currently needs to be a multi-zone cluster if you want to create a Kafka instance through the fleet manager by using managed_kafka.sh).
- git
- opm required to build custom kas-fleetshard OLM bundle from source (see Custom Components)
- oc
- kubectl
- openssl CLI tool
- rhoas CLI (https://github.com/redhat-developer/app-services-cli)
- A user with administrative privileges in the OpenShift cluster, logged in using oc or kubectl
- yq, if kas-fleet-manager-service-template-params is provided
- OSD Cluster with the following specs. Clusters with fewer/smaller compute nodes may work, but have not been verified with kas-installer.
  - Plan developer.x1
    - 6 compute nodes
    - Size: m5.2xlarge
    - MultiAz: N/A
  - Plan standard.x1
    - 9 compute nodes (3 per zone)
    - Size: m5.2xlarge
    - MultiAz: True
  - Plan standard.x2
    - 12 compute nodes (4 per zone)
    - Size: m5.2xlarge
    - MultiAz: True
On Mac, install via brew:
- gsed
- coreutils
- openssl
KAS Installer deploys and configures the following components that are part of Managed Kafka Service:
- MAS SSO
- KAS Fleet Manager
- Observability Operator (via KAS Fleet Manager)
- sharded-nlb IngressController
- KAS Fleet Shard and Strimzi Operators (via KAS Fleet Manager)
It deploys and configures the components to the cluster set in the user's kubeconfig file.
Additionally, a single Data Plane cluster is configured ready to be used, in the same cluster set in the user's kubeconfig file.
- Create and fill the KAS installer configuration file kas-installer.env (an illustrative sketch is shown after the troubleshooting note below). Minimally, the values identified as [required] in kas-installer-defaults.env must be configured.
- Make sure you have run oc login --server=<api cluster url|https://api.xxx.openshiftapps.com:6443> against your target OSD cluster. You will be asked for a password or a token.
- Run the KAS installer kas-installer.sh to deploy and configure Managed Kafka Service.
- Run uninstall.sh to remove KAS from the cluster. You should remove any deployed Kafkas before running this script.
Troubleshooting: If the installer crashed due to a configuration error in kas-installer.env, you can often rerun the installer after fixing the configuration issue. It is not necessary to run uninstall.sh before retrying.
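A minimal kas-installer.env sketch (illustrative only; the exact set of [required] values is defined in kas-installer-defaults.env):

# Red Hat account username used for MAS-SSO logins and rhoas login (the password equals the username)
RH_USERNAME=<your username>
# Run the admin-server over edge-terminated TLS (needed for some rhoas CLI features)
MANAGEDKAFKA_ADMINSERVER_EDGE_TLS_ENABLED=true
# Optional: switch to OCM mode by providing an OCM offline token and cluster ID
#OCM_SERVICE_TOKEN=<your OCM offline token>
#OCM_CLUSTER_ID=<OSD cluster identifier>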
Deploying a cluster with kas-fleet-manager in standalone
is the default for kas-installer (when OCM_SERVICE_TOKEN
is not defined).
In this mode, the fleet manager deploys the data plane components from OLM bundles.
NOTE:
In standalone mode, predefined bundles are used for Strimzi and KAS Fleetshard operators. To use a different bundle
you'll need to build a dev bundle and set either STRIMZI_OLM_INDEX_IMAGE
or KAS_FLEETSHARD_OLM_INDEX_IMAGE
environment variables.
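For example, in kas-installer.env (or the environment); the image references below are hypothetical placeholders for index images you have built and pushed yourself:

STRIMZI_OLM_INDEX_IMAGE=quay.io/<your-org>/strimzi-index:<tag>
KAS_FLEETSHARD_OLM_INDEX_IMAGE=quay.io/<your-org>/kas-fleetshard-index:<tag>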
Installation with OCM mode allows kas-fleet-manager to deploy the data plane components as OCM addons. This mode can be
used by setting OCM_SERVICE_TOKEN
to your OCM offline token and also setting
OCM_CLUSTER_ID
to the identifier of the OSD cluster used in the deployment.
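For example, in kas-installer.env:

OCM_SERVICE_TOKEN=<your OCM offline token>
OCM_CLUSTER_ID=<identifier of the OSD cluster>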
NOTE:
In OCM mode, it may take up to 10 minutes after the kas-installer.sh
completes before the addon installations are ready. Until ready,
the fleet-manager API will reject new Kafka requests.
The kas-installer.sh
process will check for the presence of an executable named kas-fleet-manager-service-template-params
in
the project root. When available, it will be executed with the expectation that key/value pairs will be written to stdout. The output
will be used when processing the kas-fleet-manager's service-template.yml.
The name of the executable intentionally has no extension, indicating that it may be written in any language the user knows to be
supported in their own environment.
Note that oc process
requires that individual parameters are specified on a single line.
For example, to provide a custom KAS_FLEETSHARD_OPERATOR_SUBSCRIPTION_CONFIG
parameter value to the fleet manager template,
something like the following may be used for a kas-fleet-manager-service-template-params
executable:
#!/bin/bash
# Declare the subscription config using multi-line YAML
MY_CUSTOM_SUBSCRIPTION_CONFIG='---
resources:
  requests:
    memory: "1Gi"
    cpu: "500m"
  limits:
    memory: "1Gi"
    cpu: "500m"
env:
  - name: SSO_ENABLED
    value: "true"
  - name: MANAGEDKAFKA_ADMINSERVER_EDGE_TLS_ENABLED
    value: "true"
  - name: STANDARD_KAFKA_CONTAINER_CPU
    value: 500m
  - name: STANDARD_KAFKA_CONTAINER_MEMORY
    value: 4Gi
  - name: STANDARD_KAFKA_JVM_XMS
    value: 1G
  - name: STANDARD_ZOOKEEPER_CONTAINER_CPU
    value: 500m
'
# Quota override for my organization
MY_CUSTOM_ORG_QUOTA='
- id: <YOUR ORG HERE>
  max_allowed_instances: 10
  any_user: true
  registered_users: []
'
# Serialize to a single line as JSON (subset of YAML) to standard output
echo "KAS_FLEETSHARD_OPERATOR_SUBSCRIPTION_CONFIG='$(echo "${MY_CUSTOM_SUBSCRIPTION_CONFIG}" | yq e -o=json -I=0)'"
# Disable organization quotas (allows deployment of developer instances)
echo "REGISTERED_USERS_PER_ORGANISATION='[]'"
# Custom organization quota (allows deployment of standard instances), disabled/commented here
#echo "REGISTERED_USERS_PER_ORGANISATION='$(echo "${MY_CUSTOM_ORG_QUOTA}" | yq e -o=json -I=0)'"
Similar to the kas-fleet-manager-service-template-params
previously described, the kas-installer.sh
process will check for the presence of an executable named kas-fleet-manager-secrets-template-params
in
the project root. When available, it will be executed with the expectation that key/value pairs will be written to stdout. The output
will be used when processing the kas-fleet-manager's secrets-template.yml.
The default kas-fleet-manager configuration enables the deployment of two instance types, standard
and developer
. Permission
to create an instance of a particular type depends on the organization's quota for the user creating the instance.
- Creating a developer instance type requires that the user creating the instance does not have an instance quota. Use parameter customization to set the REGISTERED_USERS_PER_ORGANISATION property to an empty array [].
- Creating a standard instance type requires that the user creating the instance does have an instance quota. Use parameter customization to set the REGISTERED_USERS_PER_ORGANISATION property to an array containing the user's org. See the MY_CUSTOM_ORG_QUOTA variable in the sample script for an example.
To create a standard.x2 instance (or another non-default instance type), the plan
must be provided to the Kafka create request. If using the managed_kafka.sh
script, the --plan
argument may be used:
managed_kafka.sh --create mykafka --plan standard.x2
The default installation configurations will deploy kas-fleet-manager with a single cloud provider and region. The
provider name and region may be configured in the environment using the CLOUD_PROVIDER
and REGION
variables. See
the kas-installer-defaults.env
for the default values.
Depending on the cloud provider configured, additional provider-specific configuration may be required, for example
GCP_API_CREDENTIALS
for the gcp
CLOUD_PROVIDER
. Please see the documentation corresponding to the version of
kas-fleet-manager in use for details.
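For example, in kas-installer.env:

CLOUD_PROVIDER=aws
REGION=us-east-1
# Only needed when CLOUD_PROVIDER=gcp
#GCP_API_CREDENTIALS='<GCP service account credentials>'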
In order to allow clients to deploy Kafka clusters to multiple clusters and/or regions, the following parameters must be customized:
- KUBE_CONFIG in kas-fleet-manager-secrets-template-params (standalone only)
- SUPPORTED_CLOUD_PROVIDERS in kas-fleet-manager-service-template-params
- CLUSTER_LIST in kas-fleet-manager-service-template-params
The following sections describe the contents of these parameters in detail.
The SUPPORTED_CLOUD_PROVIDERS parameter contains a list of supported cloud providers in a yaml format. See deploying-kas-fleet-manager-to-openshift.md for more details.
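A rough sketch of the shape, expressed as a fragment of a kas-fleet-manager-service-template-params script (the variable name is illustrative, and the exact schema, including any instance type limits, is defined by the kas-fleet-manager version in use; see deploying-kas-fleet-manager-to-openshift.md):

MY_CLOUD_PROVIDERS='
- name: aws
  default: true
  regions:
    - name: us-east-1
      default: true
      supported_instance_type:
        standard: {}
        developer: {}
'
echo "SUPPORTED_CLOUD_PROVIDERS='$(echo "${MY_CLOUD_PROVIDERS}" | yq e -o=json -I=0)'"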
The CLUSTER_LIST parameter contains a list of clusters for kas-fleet-manager. See data-plane-osd-cluster-options.md for more details.
When running in standalone mode, the KUBE_CONFIG contains cluster connection information for each cluster to which Kafka can be deployed. There should be one entry in KUBE_CONFIG for each entry in CLUSTER_LIST. For example, to generate a KUBE_CONFIG for two clusters:
oc login -u kubeadmin -p <first_cluster_password> <first_cluster_url>
oc config view --minify --raw > /tmp/c1.yaml
oc login -u kubeadmin -p <second_cluster_password> <second_cluster_url>
oc config view --minify --raw > /tmp/c2.yaml
(KUBECONFIG=/tmp/c1.yaml:/tmp/c2.yaml oc config view --raw)
The KUBE_CONFIG value must be base64 encoded. For example, the following may be used to encode the value in the kas-fleet-manager-secrets-template-params
:
echo "KUBE_CONFIG='$(echo "${KUBE_CONFIG}" | yq -o=json -I=0 | ${BASE64} -w0)'"
Custom domain name registration for the data plane routes may be enabled using the following configurations in the kas-fleet-manager
customization scripts. If KAFKA_DOMAIN_NAME
is not specified, the default value from the
KFM template will be used.
kas-fleet-manager-service-template-params
echo "ENABLE_KAFKA_CNAME_REGISTRATION='true'" echo "KAFKA_DOMAIN_NAME='<your domain here>'"
kas-fleet-manager-secrets-template-params
echo "ROUTE53_ACCESS_KEY='<your AWS Route53 access key>'" echo "ROUTE53_SECRET_ACCESS_KEY='<your AWS Route53 secret access key>'"
With this configuration, you will likely also want to provide a certificate and key, either by generating one using
the custom TLS instructions, or by directly providing values for KAFKA_TLS_CERT
and KAFKA_TLS_KEY
in the kas-installer.env
file or environment.
The version of the Observability Operator deployed by kas-fleet-manager can be configured with the following parameters in kas-fleet-manager-service-template-params
:
- OBSERVABILITY_OPERATOR_INDEX_IMAGE
- OBSERVABILITY_OPERATOR_STARTING_CSV
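For example, in kas-fleet-manager-service-template-params (placeholder values shown):

echo "OBSERVABILITY_OPERATOR_INDEX_IMAGE='<observability operator index image pull spec>'"
echo "OBSERVABILITY_OPERATOR_STARTING_CSV='<starting CSV name>'"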
Configuration of kas-fleet-manager's SSO providers is done by setting the SSO_PROVIDER_TYPE
configuration variable. When not set, the default provider is mas_sso
. To use RH SSO,
the variable may be set to redhat_sso
and additional configuration can be provided for REDHAT_SSO_HOSTNAME
(default sso.stage.redhat.com), REDHAT_SSO_REALM
(default redhat-external
),
REDHAT_SSO_CLIENT_ID
(required), and REDHAT_SSO_CLIENT_SECRET
(required). See the description for each variable in the kas-installer-defaults.env
file for more information.
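For example, in kas-installer.env (the client ID and secret are your own service account credentials; the hostname and realm shown are the documented defaults):

SSO_PROVIDER_TYPE=redhat_sso
REDHAT_SSO_HOSTNAME=sso.stage.redhat.com
REDHAT_SSO_REALM=redhat-external
REDHAT_SSO_CLIENT_ID=<client ID>
REDHAT_SSO_CLIENT_SECRET=<client secret>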
The kas-fleet-manager will use MAS-SSO as an identity provider for its admin API endpoints by default. An alternate IdP
may be provided by setting ADMIN_API_SSO_BASE_URL
, ADMIN_API_SSO_REALM
, and ADMIN_API_SSO_ENDPOINT_URI
in your
kas-installer.env
configuration, or in the environment. See kas-installer-defaults.env
for the default values.
Users that need to configure a different set of client roles to authorize the admin API operation should use the ADMIN_AUTHZ_CONFIG
variable to override the defaults. This variable can be set using fleet manager parameter customization.
See the ADMIN_AUTHZ_CONFIG
parameter in kas-fleet-manager's templates/service-template.yml
file for the default value.
A token for use with the admin API endpoints may be obtained by calling ./get_access_token.sh --sre-admin
(this is used internally by
managed_kafka.sh
when the --admin
flag is provided). By default, the script is configured for use with a local MAS-SSO
instance which has a client pre-configured for this purpose, kafka-admin
.
To provide a custom service account for use with the admin endpoints, provide values for the ADMIN_API_CLIENT_ID
and
ADMIN_API_CLIENT_SECRET
configuration values.
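For example, in kas-installer.env or the environment:

ADMIN_API_CLIENT_ID=<service account client ID>
ADMIN_API_CLIENT_SECRET=<service account client secret>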
MAS SSO (which must always be available to support kas-fleet-manager admin operations) may optionally be installed with modified resource requests and limits. The two environment variables described below should be in JSON format on a single line.
Configuration of the SSO operator can be done using the MAS_SSO_OPERATOR_SUBSCRIPTION_CONFIG
variable, the format of which must conform to the Subscription Config object.
Example with custom CPU and memory requests:
MAS_SSO_OPERATOR_SUBSCRIPTION_CONFIG='{"resources": { "requests": { "cpu": "500m", "memory": "512Mi" }, "limits": { "cpu": "500m", "memory": "512Mi" }}}'
The MAS SSO Keycloak instance may be configured using the MAS_SSO_KEYCLOAK_RESOURCES
variable. The format is the same
as the Kubernetes resources
object for other types. See mas-sso/keycloak.yaml
for the defaults.
Example custom CPU and memory requests:
MAS_SSO_KEYCLOAK_RESOURCES='{ "requests": { "cpu": "500m", "memory": "768Mi" }, "limits": { "cpu": "500m", "memory": "768Mi" }}'
Custom-built components are supported for kas-fleet-manager and kas-fleetshard.
- kas-fleet-manager: see the documentation for KAS_FLEET_MANAGER_IMAGE_BUILD in the kas-installer-defaults.env file.
- kas-fleetshard: prior to running kas-installer.sh, execute operators/generate-kas-fleetshard-olm-bundle.sh, passing the required parameters (see operators/generate-kas-fleetshard-olm-bundle.sh --help for details, and the example after this list). The script will build kas-fleetshard from source and an OLM bundle index, optionally updating the kas-installer.env file with the configuration. Running kas-installer.sh will deploy the custom-built operator.
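For example, to see the required parameters before building the bundle:

./operators/generate-kas-fleetshard-olm-bundle.sh --help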
Use ./rhoas_login.sh
as a shortcut to log in to the CLI. Log in using the username you specified as RH_USERNAME
in the env file. The password is the same as the RH_USERNAME
value.
Alternatively (and likely preferably), use ./login-all.sh
which makes sure you are also logged in to your OpenShift Dedicated cluster.
There are a couple of things that are expected not to work when using the RHOAS CLI with a kas-installer installed instance. These are noted below.
- To create an account, run rhoas service-account create --short-description foo --file-format properties.
- To list existing service accounts, run rhoas service-account list.
- To remove an existing service account, run rhoas service-account delete --id=<ID of service account>.
- To create a cluster, run rhoas kafka create --bypass-checks --provider aws --region us-east-1 --name <clustername>. Note that --bypass-checks is required as the T&Cs endpoint will not exist in your environment. The provider and region must be passed on the command line.
- To list existing clusters, run rhoas kafka list.
- To remove an existing cluster, run rhoas kafka delete --name <clustername>.
To use these CLI features, you must set MANAGEDKAFKA_ADMINSERVER_EDGE_TLS_ENABLED=true
in your kas-installer.env
so that the admin-server will run over TLS (edge terminated).
- To create a topic, run rhoas kafka topic create --name=foo
- To grant access, run rhoas kafka acl grant-access --topic=foo --all-accounts --producer
etc.
The UI is supported only with SSO_PROVIDER_TYPE=redhat_sso and REDHAT_SSO_HOSTNAME=sso.redhat.com (production).
See the Running the UI wiki page for more detailed instructions.
Instances of app-services-ui,
kas-ui, and kafka-ui
may be run locally by using the ui/install.sh
script. Running the UI installation will start three containers using
podman
or docker
(auto-detected, but can be forced by setting CONTAINER_CLI
to docker
or podman
)
with a main entrypoint of https://127.0.0.1:1337
. The IP must be configured with name prod.foo.redhat.com
in the user's
local /etc/hosts
file.
...
127.0.0.1 prod.foo.redhat.com
...
The repository and branch (or tag/commit) may be configured in the kas-installer.env
file using the following variables:
APP_SERVICES_UI_GIT_URL
APP_SERVICES_UI_GIT_REF
KAS_UI_GIT_URL
KAS_UI_GIT_REF
KAFKA_UI_GIT_URL
KAFKA_UI_GIT_REF
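For example, to build the UIs from a specific fork and branch (the URLs and refs below are hypothetical placeholders):

APP_SERVICES_UI_GIT_URL=https://github.com/<your-org>/app-services-ui.git
APP_SERVICES_UI_GIT_REF=main
KAS_UI_GIT_URL=https://github.com/<your-org>/kas-ui.git
KAS_UI_GIT_REF=main
KAFKA_UI_GIT_URL=https://github.com/<your-org>/kafka-ui.git
KAFKA_UI_GIT_REF=main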
Note, when navigating to https://prod.foo.redhat.com:1337/
, you may be prompted to login to sso.redhat.com
as well
as the local MAS-SSO instance. The credentials for MAS-SSO should be the value of the RH_USERNAME
variable for both
username and password.
Users may provide custom-generated TLS certificates using the gen-certs.sh
script. The output is placed in the certs
directory (ignored by git) and includes a CA certificate, CA key, server certificate, and server key. Each time the script
is run, the server files will be replaced, but the CA certificate and key will be retained. The server certificate is issued
specifically for the current session's domain name, as determined by the K8S_CLUSTER_DOMAIN
variable. To configure the
certificates for a Kafka instance, set the following variables in kas-installer.env
, where KAS_INSTALLER_HOME
is the
path to the project root:
KAFKA_TLS_CERT="$(cat ${KAS_INSTALLER_HOME}/certs/server-cert.pem)"
KAFKA_TLS_KEY="$(cat ${KAS_INSTALLER_HOME}/certs/server-key.pem)"
The CA certificate may (for example) be imported into your browser, enabling the generated server certificates used by the admin API endpoint and the UI to be trusted during testing without needing to trust them individually.
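To sanity-check the generated files, the server certificate can be verified against the CA; the CA file name below is an assumed placeholder for whatever gen-certs.sh writes to the certs directory:

openssl verify -CAfile certs/<CA certificate file> certs/server-cert.pem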
Please favour using the rhoas command line. These scripts will be removed at some point soon.
The service_account.sh
script supports creating, listing, and deleting service accounts.
- To create an account, run service_account.sh --create. The new service account information will be printed to the console. Be sure to retain the clientID and clientSecret values to use when generating an access token or for connecting to Kafka directly.
- To list existing service accounts, run service_account.sh --list.
- To remove an existing service account, run service_account.sh --delete <ID of service account>.
- Run get_access_token.sh using the clientID and clientSecret as the first and second arguments. The generated access token and its expiration date and time will be printed to the console.
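For example:

./get_access_token.sh <clientID> <clientSecret>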
The managed_kafka.sh
script supports creating, listing, and deleting Kafka clusters.
- To create a cluster, run managed_kafka.sh --create <cluster name>. Progress will be printed as the cluster is prepared and provisioned. If kas-fleet-manager has been configured with more than one cloud provider or region, the --provider and/or --region arguments must be provided to the managed_kafka.sh script. Otherwise, the script will discover the single provider and region using the fleet manager's cloud provider endpoints.
- To list existing clusters, run managed_kafka.sh --list.
- To remove an existing cluster, run managed_kafka.sh --delete <cluster ID>.
- To patch an existing cluster (for instance, changing a Strimzi version), run managed_kafka.sh --admin --patch <cluster ID> '{ "strimzi_version": "strimzi-cluster-operator.v0.23.0-3" }'
- To use the Kafka bin scripts against a pre-existing Kafka cluster, run managed_kafka.sh --certgen <kafka id> <Service_Account_ID> <Service_Account_Secret>. If you do not pass the <Service_Account_ID> <Service_Account_Secret> arguments, the script will attempt to create a service account for you. The cert generation is already performed at the end of --create. Point the --command-config flag to the generated app-services.properties in the working directory.
- If two service accounts already exist, you must delete one of them for this script to work.
To use the Kafka Cluster that is created with the managed_kafka.sh
script with command line tools like kafka-topics.sh
or kafka-console-consumer.sh
do the following.
- Run tool_access.sh <cluster name>. This will generate the certificate / truststore and app-services.properties file. It will also create a service account and grant GROUP and TOPIC permissions. The bootstrap host will be displayed upon completion, and is also in the properties file as bootstrap.servers.
- Execute your tool, for example kafka-topics.sh --bootstrap-server <bootstrap-host>:443 --command-config app-services.properties --topic foo --create --partitions 9
- If you want to use a different service account, you may edit the app-services.properties file and update the username and password with clientID and clientSecret.
- Install all cluster components using kas-installer.sh; ensure you enable MANAGEDKAFKA_ADMINSERVER_EDGE_TLS_ENABLED as documented in service-customization.
- Clone the e2e-test-suite repository locally and change directory to the test suite project root.
- Generate the test suite configuration with ${KAS_INSTALLER_DIR}/e2e-test-config.sh > config.json. When using kas-installer.sh with the redhat_sso provider type, the user accounts to be used by the E2E tests must be preconfigured. The e2e-test-config.sh script will set all variables named like E2E_USER_ in the test suite's config.json file. The E2E_USER_ prefix is not included in the JSON configuration. For example, an environment variable E2E_USER_PRIMARY_USERNAME='my-primary-user' will be added to config.json as the key/value pair "PRIMARY_USERNAME": "my-primary-user".
- Execute individual test classes:
./hack/testrunner.sh test KafkaAdminPermissionTest
./hack/testrunner.sh test KafkaInstanceAPITest
./hack/testrunner.sh test KafkaCLITest