Copyright 2023 New Vector Ltd
This operator contains CRDs and a controller for managing numerous components of the Matrix stack. The list of managed components can be found in the `watches.yml` file.

The operator supports running in any Kubernetes cluster, including OpenShift.

The operator manages the component CRDs to deploy the Kubernetes workloads, ingresses, etc. Each component CRD can be configured independently of the others; some component CRDs wait for inputs generated by the deployment of other components.

To make it easier to write coherent, integrated component CRDs, you can also deploy the updater. The updater watches the `ElementDeployment` CRD and generates the Element resource CRDs to be ingested by the operator.
This document will walk you through getting started with our Element Starter Edition Core. You will need a Kubernetes environment to deploy into; if you do not have one, we have had success deploying into a single-node microk8s environment.
- Kubernetes
- cert-manager installed, providing an appropriately configured `ClusterIssuer` named `letsencrypt`
- An ingress controller with an `IngressClass` (in this guide, we are using an ingress controller with a class name of `public`)
- A PostgreSQL database with UTF-8 encoding and a C locale
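If you still need to create that database, a minimal sketch follows. The host, user, and database names here are hypothetical placeholders; adjust them to your environment:

```shell
# Placeholder host/user/database names; adjust to your environment.
# TEMPLATE template0 is required when the requested locale differs
# from the cluster default.
psql -h db.element.demo -U postgres -c \
  "CREATE DATABASE synapse ENCODING 'UTF8' LC_COLLATE='C' LC_CTYPE='C' TEMPLATE template0;"
```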
The first step is to start on a machine with helm v3 installed and configured with your Kubernetes cluster, and pull down the two charts that you will need.

First, let's add the starter edition repository to helm:

```shell
helm repo add ess-starter-edition-core https://vector-im.github.io/ess-starter-edition-core
```

Now that we have the repository configured, we can verify this with:

```shell
helm repo list
```

and should see the following in that output:

```
NAME                      URL
ess-starter-edition-core  https://vector-im.github.io/ess-starter-edition-core
```
To be able to run the helm charts, they will need a namespace to run in. You can make this whatever you would like, but for the sake of this guide, we will create an `element-operator` namespace and an `element-updater` namespace. To do this, please run:

```shell
kubectl create ns element-operator
kubectl create ns element-updater
```
To install the helm charts and actually deploy the `element-updater` and the `element-operator` with their default configurations, simply run:

```shell
helm install element-updater ess-starter-edition-core/element-updater --namespace element-updater
helm install element-operator ess-starter-edition-core/element-operator --namespace element-operator
```
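Before moving on, you can sanity-check that the charts registered their CRDs by listing the resources in the `matrix.element.io` API group (the group used by the CRD examples in this guide); this is an optional verification step, not part of the install itself:

```shell
# Lists the CRD-backed resource types the operator and updater manage.
kubectl api-resources --api-group=matrix.element.io
```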
Now at this point, you should have the following two pods up and running:

```
[user@helm ~]$ kubectl get pods -n element-updater
NAME                                                  READY   STATUS    RESTARTS     AGE
element-updater-controller-manager-5b4f9cc5d4-9krv6   2/2     Running   6 (8h ago)   2d
[user@helm ~]$ kubectl get pods -n element-operator
NAME                                                   READY   STATUS    RESTARTS     AGE
element-operator-controller-manager-778c8bfbcf-4zzpl   2/2     Running   6 (8h ago)   2d
```
Create a CRD definition on your own, starting from this base template:

```yaml
apiVersion: matrix.element.io/v1alpha1
kind: ElementDeployment
metadata:
  name: first-element
  namespace: element-onprem
spec:
  global:
    k8s:
      ingresses:
        ingressClassName: "public"
    secretName: global
    config:
      genericSharedSecretSecretKey: genericSharedSecret
      domainName: "element.demo"
  components:
    elementWeb:
      secretName: external-elementweb-secrets
      k8s:
        ingress:
          tls:
            certmanager:
              issuer: letsencrypt
            mode: certmanager
          fqdn: "web.element.demo"
    synapse:
      secretName: external-synapse-secrets
      config:
        additional: |
          enable_registration: True
          enable_registration_without_verification: True
        postgresql:
          host: db.element.demo
          user: postgres
          database: postgres
          passwordSecretKey: pgpassword
          sslMode: disable
      k8s:
        ingress:
          tls:
            certmanager:
              issuer: letsencrypt
            mode: certmanager
          fqdn: "hs.element.demo"
    wellKnownDelegation:
      secretName: external-wellknowndelegation-secrets
      k8s:
        ingress:
          tls:
            certmanager:
              issuer: letsencrypt
            mode: certmanager
    slidingSync:
      config:
        postgresql:
          host: web.element.demo
          user: postgres
          database: postgres
          passwordSecretKey: pgpassword
          sslMode: disable
        syncSecretSecretKey: syncSecret
      k8s:
        ingress:
          tls:
            certmanager:
              issuer: letsencrypt
            mode: certmanager
          fqdn: "sync.element.demo"
```
For more information on this option, please see our Element Deployment CRD documentation. Note: At present, this has not been written.
N.B. This guide assumes that you are using the `element-onprem` namespace for deploying Element. You can call it whatever you want, and if it doesn't exist yet, you can create it with:

```shell
kubectl create ns element-onprem
```
Now we need to load the secrets into Kubernetes so that the deployment can access them. If you built your own CRD from scratch, you will need to follow our Element Deployment CRD documentation.

Here is a basic Python script to build the secrets you need to get started:
```python
import os
import base64
import signedjson.key
from datetime import datetime

## Define the secrets file
SECRETS_FILE = 'secrets.yml'

## Function to generate a secret and format it properly
def generate_secret(name):
    value = base64.b64encode(os.urandom(32)).decode('utf-8')
    return f'  {name}: "{value}"'

## Function to format the postgres password
def encode_pgpassword(name, pgpassword):
    encoded_value = base64.b64encode(pgpassword.encode('utf-8')).decode('utf-8')
    return f'  {name}: "{encoded_value}"'

## Function to generate a unique signing key for Synapse
def generate_signing_key(name):
    signing_key = signedjson.key.generate_signing_key(0)
    value = f'{signing_key.alg} {signing_key.version} {signedjson.key.encode_signing_key_base64(signing_key)}'
    encoded_value = base64.b64encode(value.encode('utf-8')).decode('utf-8')
    return f'  {name}: "{encoded_value}"'

## Back up the secrets file if it already exists
if os.path.isfile(SECRETS_FILE):
    timestamp = datetime.now().strftime("%s")
    backup_file = f"{SECRETS_FILE}.bak.{timestamp}"
    os.rename(SECRETS_FILE, backup_file)
    print(f"Backing up pre-existing {SECRETS_FILE} file to {backup_file}.")

## Prompt user for the Postgres password
pgpassword = input("Enter your Postgres Password: ")

## Populate secrets
print("Populating secrets.")
with open(SECRETS_FILE, 'a') as f:
    f.write('''apiVersion: v1
kind: Secret
metadata:
  name: global
  namespace: element-onprem
data:
''')
    f.write(generate_secret("genericSharedSecret") + '\n')
    f.write('''---
apiVersion: v1
kind: Secret
metadata:
  name: external-synapse-secrets
  namespace: element-onprem
data:
''')
    f.write(generate_secret("macaroon") + '\n')
    f.write(generate_secret("registrationSharedSecret") + '\n')
    f.write(generate_signing_key("signingKey") + '\n')
    f.write(encode_pgpassword("pgpassword", pgpassword) + '\n')

## Tell the user we are done
print(f"Done. Secrets are in {SECRETS_FILE}.")
```
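The helper functions above simply base64-encode raw values, which is the encoding Kubernetes expects in a Secret's `data` field. A quick stand-alone sanity check of that round trip (plain Python, no cluster required):

```python
import base64
import os

# Mirror generate_secret(): 32 random bytes, base64-encoded for a Secret's data field.
raw = os.urandom(32)
encoded = base64.b64encode(raw).decode('utf-8')

# Kubernetes base64-decodes the value before handing it to the workload,
# so decoding must recover the original bytes.
assert base64.b64decode(encoded) == raw
print(f"ok: {len(encoded)} base64 chars for {len(raw)} raw bytes")
```

Running it prints `ok: 44 base64 chars for 32 raw bytes`, since base64 expands every 3 bytes into 4 characters (with padding).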
Create a file `build_secrets.py` with this content and then run it with `python3 ./build_secrets.py` to create a `secrets.yml` that can be added to the `element-onprem` namespace with:

```shell
kubectl apply -f secrets.yml
```
Save your `ElementDeployment` CRD definition from above to a file named `deployment.yml`. At this point, we are ready to deploy it into our cluster with the following command:

```shell
kubectl apply -f ./deployment.yml -n element-onprem
```
To check on the progress of the deployment, you will first watch the logs of the updater:

```shell
kubectl logs -f -n element-updater element-updater-controller-manager-<rest of pod name>
```

You will have to tab-complete to get the correct hash for the `element-updater-controller-manager` pod name.
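If you'd rather not hunt for the pod hash at all, `kubectl logs` can also target the Deployment directly and will pick a pod for you; a sketch, assuming the default release name used in this guide:

```shell
# Follows logs from a pod belonging to the updater's Deployment,
# without needing the generated pod-name suffix.
kubectl logs -f -n element-updater deployment/element-updater-controller-manager
```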
Once the updater is no longer pushing out new logs, you can track progress with the operator or by watching pods come up in the `element-onprem` namespace.
Operator status:

```shell
kubectl logs -f -n element-operator element-operator-controller-manager-<rest of pod name>
```
Watching pods come up in the `element-onprem` namespace:

```shell
watch kubectl get pods -n element-onprem
```
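As an alternative to `watch`, `kubectl wait` can block until every pod in the namespace reports Ready; a sketch, where the 600-second timeout is an arbitrary assumption you can tune:

```shell
# Blocks until all pods in element-onprem are Ready, or fails after 10 minutes.
kubectl wait --for=condition=Ready pods --all -n element-onprem --timeout=600s
```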
To upgrade to the latest version of the charts and redeploy the `element-updater` and the `element-operator` with their default configurations, simply run:

```shell
helm repo update ess-starter-edition-core
helm upgrade element-updater ess-starter-edition-core/element-updater --namespace element-updater
helm upgrade element-operator ess-starter-edition-core/element-operator --namespace element-operator
```
If you have registration closed, you will need a way to create new users. To do that with the starter edition core, you can use `kubectl exec` to open a shell in the Synapse pod and run the `register_new_matrix_user` command.
Let's look at how to do this. First, let's find the `synapse-main` pod:

```shell
kubectl get pods -n element-onprem | grep synapse-main
```

In this case, we get output similar to:

```
first-element-synapse-main-0   1/1   Running   0   27m
```
Now that we know the pod name, we can run the `kubectl exec` command:

```shell
kubectl exec -it -n element-onprem first-element-synapse-main-0 -- /bin/sh
```
and once in the shell, we can run:

```shell
register_new_matrix_user -c /config/homeserver.yaml
```

and this will allow us to register a new Matrix user.
N.B. You will need to register an admin user to perform administrative functions on the server.
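`register_new_matrix_user` can also be driven non-interactively; a sketch, where the username and password are placeholders you must change, `-a` grants server-admin rights, and the `http://localhost:8008` client API address is an assumption about the in-pod Synapse listener:

```shell
# Placeholder credentials; -a makes the new user a server admin.
register_new_matrix_user -c /config/homeserver.yaml \
  -u admin -p 'CHANGE-ME' -a http://localhost:8008
```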