jitsi-multi-shard

Intro

A multi-shard deployment of jitsi-meet based on hpi-schul-cloud/jitsi-deployment.

Prerequisites

  • Nginx-Ingress Controller
  • Metacontroller
    • Install the aporquez/metacontroller.
    • This is used to create the NodePort-type Service for each JVB pod.
    • IMPORTANT: Only one copy of it can be installed in a given Kubernetes cluster, because it ships its own CRDs; attempting to install a second copy results in an error.
  • Label the Kubernetes nodes with their zones: shard-0 maps to ZONE_0, shard-1 to ZONE_1, shard-2 to ZONE_2, and so on.
    • e.g. kubectl label nodes node-name topology.kubernetes.io/zone=ZONE_0
  • Open the Kubernetes node ports 30300-30399 for shard-0, 30400-30499 for shard-1, 30500-30599 for shard-2, and so on.
    • The size of each port range depends on the maximum number of JVB replicas per shard.
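Assuming the convention above (shard N uses ports 30300 + 100·N through 30300 + 100·N + 99), the range to open for a given shard can be sketched with a small helper; the function name is hypothetical:

```shell
#!/bin/sh
# Hypothetical helper: node-port range for a given shard index,
# following the 100-ports-per-shard convention described above.
shard_port_range() {
  shard="$1"
  start=$((30300 + shard * 100))
  end=$((start + 99))
  echo "${start}-${end}"
}

shard_port_range 0   # prints 30300-30399
shard_port_range 1   # prints 30400-30499
```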
  • Prometheus Adapter

Deployment

Namespace

kubectl create namespace jitsi
kubectl label namespace jitsi <label-key>=<label-value>

TURN/STUN

Not configured.

ENABLE AUTH

Uncomment the Auth-related environment variables.

Include the following variables when creating the secret:

  • JWT_APP_ID
  • JWT_APP_SECRET
  • JWT_ACCEPTED_ISSUERS
  • JWT_ACCEPTED_AUDIENCES

Create the Kubernetes secret for Jitsi

Using kubectl

kubectl create secret generic jitsi-config -n jitsi --from-literal=JICOFO_COMPONENT_SECRET=replace --from-literal=JICOFO_AUTH_PASSWORD=replace --from-literal=JVB_AUTH_PASSWORD=replace
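If auth is enabled, the JWT variables listed above can be added to the same secret; for example (all values below are placeholders):

```shell
kubectl create secret generic jitsi-config -n jitsi \
  --from-literal=JICOFO_COMPONENT_SECRET=replace \
  --from-literal=JICOFO_AUTH_PASSWORD=replace \
  --from-literal=JVB_AUTH_PASSWORD=replace \
  --from-literal=JWT_APP_ID=replace \
  --from-literal=JWT_APP_SECRET=replace \
  --from-literal=JWT_ACCEPTED_ISSUERS=replace \
  --from-literal=JWT_ACCEPTED_AUDIENCES=replace
```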

Using kustomize

cd yaml-k8s/yaml-overlay/dev
kustomize edit add secret jitsi-config --disableNameSuffixHash --from-literal=JICOFO_COMPONENT_SECRET=replace --from-literal=JICOFO_AUTH_PASSWORD=replace --from-literal=JVB_AUTH_PASSWORD=replace

Update the deployment-specific settings of the overlays:

  • PUBLIC_URL
  • HAProxy backend web
    • Update the backend server in the HAProxy configuration.
    • The SRV name depends on the namespace: change _http._tcp.web.jitsi.svc.cluster.local to _http._tcp.web.namespace-name.svc.cluster.local, e.g. _http._tcp.web.jitsi-dev.svc.cluster.local.
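The SRV name above follows the standard Kubernetes pattern _http._tcp.<service>.<namespace>.svc.cluster.local. A minimal sketch (the helper name is hypothetical) that builds it for a given namespace:

```shell
#!/bin/sh
# Hypothetical helper: SRV record name for the "web" service in a namespace.
web_srv_record() {
  echo "_http._tcp.web.$1.svc.cluster.local"
}

web_srv_record jitsi-dev   # prints _http._tcp.web.jitsi-dev.svc.cluster.local
```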

Monitoring

By default, the jvb-statefulset, prosody-deployment, and haproxy-statefulset use the "prometheus.io/scrape" annotation for monitoring. Select Option 1 if you are going to use this annotation. Otherwise, remove it and follow Option 2.

Add the following custom metric rule to the Prometheus Adapter configuration. The jvb-hpa depends on the custom metric container_network_transmit_bytes_per_second to autoscale the JVB pods.

- seriesQuery: 'container_network_transmit_bytes_total{interface="eth0"}'
  resources:
    overrides:
      namespace: {resource: "namespace"}
      pod: {resource: "pod"}
  name:
    matches: "^(.*)_total"
    as: "${1}_per_second"
  metricsQuery: 'sum(rate(<<.Series>>{<<.LabelMatchers>>}[3m])) by (<<.GroupBy>>)'
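For reference, once the adapter substitutes its template variables, the query it runs against Prometheus looks roughly like the following (the namespace and pod label matchers here are illustrative, not taken from this repository):

```
sum(rate(container_network_transmit_bytes_total{interface="eth0",namespace="jitsi",pod=~"shard-0-jvb-.*"}[3m])) by (pod)
```

The rename rule then exposes the result to the custom metrics API as container_network_transmit_bytes_per_second, which is the name the jvb-hpa targets.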

TODO

  • Create a helm chart version

References