Video & sound stop working with more than 1 user
Aba007 opened this issue · 5 comments
Made a value.yaml file and deployed it. Everything looks great when testing it with 1 user on the DNS link. When more users join, the connection starts breaking: you can't see or hear anyone once there are 3 users in the room.
Is this an external traffic policy issue or something else?
My value.yaml file:
# Default values for jitsi-meet.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
global:
# Set your cluster's DNS domain here.
# "cluster.local" should work for most environments.
# Set to "" to disable the use of FQDNs (default in older chart versions).
clusterDomain: cluster.local
podLabels: {}
podAnnotations: {}
imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""
enableAuth: false
enableGuests: true
publicURL: ""
tz: Europe/Amsterdam
image:
pullPolicy: IfNotPresent
## WebSocket configuration:
#
# Both Colibri and XMPP WebSockets are disabled by default,
# since some LoadBalancer / Reverse Proxy setups can't pass
# WebSocket connections properly, which might result in breakage
# for some clients.
#
# Enable both Colibri and XMPP WebSockets to replicate the current
# upstream `meet.jit.si` setup. Keep both disabled to replicate
# older setups which might be more compatible in some cases.
websockets:
## Colibri (JVB signalling):
colibri:
enabled: false
## XMPP (Prosody signalling):
xmpp:
enabled: false
web:
replicaCount: 1
image:
repository: jitsi/web
extraEnvs: {}
service:
type: ClusterIP
port: 80
externalIPs: []
ingress:
enabled: true
ingressClassName: "nginx"
annotations:
kubernetes.io/tls-acme: "true"
cert-manager.io/cluster-issuer: cloud-dns-issuer
hosts:
- host: test.vaba.dev
paths: ['/']
tls:
- secretName: jitsi-ingress-secret
hosts:
- test.vaba.dev
# Useful for ingresses that don't support http-to-https redirect by themselves (namely: GKE).
httpRedirect: false
# When TLS termination by the ingress is not wanted, enable this and set web.service.type=LoadBalancer
httpsEnabled: false
## Resolver IP for nginx.
#
# Starting with version `stable-8044`, the web container can
# auto-detect the nameserver from /etc/resolv.conf.
# Use this option if you want to override the nameserver IP.
#
# resolverIP: 10.43.0.10
livenessProbe:
httpGet:
path: /
port: 80
readinessProbe:
httpGet:
path: /
port: 80
podLabels: {}
podAnnotations: {}
podSecurityContext: {}
# fsGroup: 2000
securityContext: {}
# capabilities:
# drop:
# - ALL
# readOnlyRootFilesystem: true
# runAsNonRoot: true
# runAsUser: 1000
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
nodeSelector: {}
tolerations: []
affinity: {}
jicofo:
replicaCount: 1
image:
repository: jitsi/jicofo
xmpp:
password:
componentSecret:
livenessProbe:
tcpSocket:
port: 8888
readinessProbe:
tcpSocket:
port: 8888
podLabels: {}
podAnnotations: {}
podSecurityContext: {}
securityContext: {}
resources: {}
nodeSelector: {}
tolerations: []
affinity: {}
extraEnvs: {}
jvb:
replicaCount: 1
image:
repository: jitsi/jvb
xmpp:
user: jvb
password:
## Set public IP addresses to be advertised by JVB.
# You can specify your nodes' IP addresses,
# or IP addresses of proxies/LoadBalancers used for your
# Jitsi Meet installation. Or both!
#
# Note that only the first IP address will be used for legacy
# `DOCKER_HOST_ADDRESS` environment variable.
#
# publicIPs:
# - 1.2.3.4
# - 5.6.7.8
## Use a STUN server to help some users punch through some
# especially nasty NAT setups. Usually makes sense for P2P calls.
stunServers: 'meet-jit-si-turnrelay.jitsi.net:443'
## Try to use the hostPort feature:
# (might not be supported by some clouds or CNI engines)
useHostPort: false
## Use host's network namespace:
# (not recommended, but might help for some cases)
useHostNetwork: false
## UDP transport port:
UDPPort: 10000
## Use a pre-defined external port for NodePort or LoadBalancer service,
# if needed. Will allocate a random port from allowed range if unset.
# (Default NodePort range for K8s is 30000-32767)
# nodePort: 10000
service:
enabled:
type: LoadBalancer
externalTrafficPolicy: Cluster
externalIPs: []
## Annotations to be added to the service (if LoadBalancer is used)
##
annotations: {}
breweryMuc: jvbbrewery
livenessProbe:
httpGet:
path: /about/health
port: 8080
readinessProbe:
httpGet:
path: /about/health
port: 8080
podLabels: {}
podAnnotations: {}
podSecurityContext: {}
securityContext: {}
resources: {}
nodeSelector: {}
tolerations: []
affinity: {}
extraEnvs: {}
metrics:
enabled: false
prometheusAnnotations: false
image:
repository: docker.io/systemli/prometheus-jitsi-meet-exporter
tag: 1.2.1
pullPolicy: IfNotPresent
serviceMonitor:
enabled: true
selector:
release: prometheus-operator
interval: 10s
# honorLabels: false
resources:
requests:
cpu: 10m
memory: 16Mi
limits:
cpu: 20m
memory: 32Mi
octo:
enabled: false
jibri:
## Enabling Jibri will allow users to record
## and/or stream their meetings (e.g. to YouTube).
enabled: false
## Use external Jibri installation.
## This setting skips the creation of Jibri Deployment altogether,
## instead creating just the config secret
## and enabling recording/streaming services.
## Defaults to disabled (use bundled Jibri).
useExternalJibri: false
## Enable single-use mode for Jibri.
## With this setting enabled, every Jibri instance
## will become "expired" after being used once (successfully or not)
## and cleaned up (restarted) by Kubernetes.
##
## Note that detecting expired Jibri, restarting and registering it
## takes some time, so you'll have to make sure you have enough
## instances at your disposal.
## You might also want to make LivenessProbe fail faster.
singleUseMode: false
## Enable recording service.
## Set this to true/false to enable/disable local recordings.
## Defaults to enabled (allow local recordings).
recording: true
## Enable livestreaming service.
## Set this to true/false to enable/disable live streams.
## Defaults to disabled (livestreaming is forbidden).
livestreaming: false
## Enable multiple Jibri instances.
## If enabled (i.e. set to 2 or more), each Jibri instance
## will get an ID assigned to it, based on pod name.
## Multiple replicas are recommended for single-use mode.
replicaCount: 1
## Enable persistent storage for local recordings.
## If disabled, jibri pod will use a transient
## emptyDir-backed storage instead.
persistence:
enabled: false
size: 4Gi
## Set this to existing PVC name if you have one.
existingClaim:
storageClassName:
shm:
## Set to true to enable "/dev/shm" mount.
## May be required by built-in Chromium.
enabled: false
## If "true", will use host's shared memory dir,
## and if "false" — an emptyDir mount.
# useHost: false
# size: 256Mi
## Configure the update strategy for Jibri deployment.
## This may be useful depending on your persistence settings,
## e.g. when you use ReadWriteOnce PVCs.
## Default strategy is "RollingUpdate", which keeps
## the old instances up until the new ones are ready.
# strategy:
# type: RollingUpdate
image:
repository: jitsi/jibri
podLabels: {}
podAnnotations: {}
breweryMuc: jibribrewery
timeout: 90
## jibri XMPP user credentials:
xmpp:
user: jibri
password:
## recorder XMPP user credentials:
recorder:
user: recorder
password:
livenessProbe:
initialDelaySeconds: 5
periodSeconds: 5
failureThreshold: 2
exec:
command:
- /bin/bash
- "-c"
- >-
curl -sq localhost:2222/jibri/api/v1.0/health
| jq '"\(.status.health.healthStatus) \(.status.busyStatus)"'
| grep -qP 'HEALTHY (IDLE|BUSY)'
readinessProbe:
initialDelaySeconds: 5
periodSeconds: 5
failureThreshold: 2
exec:
command:
- /bin/bash
- "-c"
- >-
curl -sq localhost:2222/jibri/api/v1.0/health
| jq '"\(.status.health.healthStatus) \(.status.busyStatus)"'
| grep -qP 'HEALTHY (IDLE|BUSY)'
extraEnvs: {}
serviceAccount:
# Specifies whether a service account should be created
create: true
# Annotations to add to the service account
annotations: {}
# The name of the service account to use.
# If not set and create is true, a name is generated using the fullname template
name:
xmpp:
domain: meet.jitsi
authDomain:
mucDomain:
internalMucDomain:
guestDomain:
extraCommonEnvs: {}
prosody:
enabled: true
server:
extraEnvFrom:
- secretRef:
name: '{{ include "prosody.fullname" . }}-jicofo'
- secretRef:
name: '{{ include "prosody.fullname" . }}-jvb'
- configMapRef:
name: '{{ include "prosody.fullname" . }}-common'
## Uncomment this if you want to use jibri:
# - secretRef:
# name: '{{ include "prosody.fullname" . }}-jibri'
image:
repository: jitsi/prosody
tag: 'stable-8719'
I have exactly the same issue; I will try to investigate. Maybe it has something to do with the reverse proxy in front of K8s.
Hello @Aba007!
I'm terribly sorry for the delay, a lot of stuff has been going on in my life right now.
If you have this issue only when there are 3 or more participants in the room, that means that the participants are unable to reach the JVB service. JVB (Jitsi Videobridge) routes the audio and video streams between all room participants, which is a hard requirement when there are 3+ users in the same room (2 users can talk to each other directly in P2P fashion, bypassing the JVB altogether).
I see that your JVB component uses a `LoadBalancer` service. Can you verify that it's actually reachable over UDP? We had some reports that some k8s providers and `LoadBalancer` implementations don't support UDP, so that might be the issue here. JVB used to support TCP transport as well, but the implementation was really finicky and not as robust/secure as the UDP one, so it was deprecated and eventually dropped in favour of TURN-based setups some time ago.
If it turns out that the JVB's UDP port is unreachable through the `LoadBalancer` service, there are three options I can propose:
- Try to switch from a `LoadBalancer` service to a `NodePort` one, or use `hostPort: true` if your k8s installation supports it (see the sketch after this list). Should be no problem on private clusters with internet-facing public IPs, but can be tricky (or impossible) on cloud clusters;
- Add a TURN server to the mix. `coturn` is the usual recommendation for Jitsi Meet, and the devs are currently working on providing a proper pre-configured Docker image for it, fit for use with the rest of Jitsi Meet. Unfortunately, I have no experience setting up a TURN server myself, so I can't help you here;
- Switch to a different `LoadBalancer` implementation or a different k8s cluster, so that it supports ingress UDP.
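For the first option, here's a minimal sketch of the relevant values (the field names are the same ones used in your pasted values.yaml; the IP addresses are placeholders you'd replace with your nodes' real public IPs):

```yaml
jvb:
  ## Option 1a: expose the bridge directly on every node that runs a JVB pod
  #  (requires a cloud / CNI setup that actually supports hostPort):
  useHostPort: true
  UDPPort: 10000
  ## Advertise the nodes' public IPs so clients know where to send media.
  #  Placeholder addresses, replace with your own:
  publicIPs:
    - 203.0.113.10
    - 203.0.113.11

  ## Option 1b: or keep a Service, but make it a NodePort with a fixed port:
  # service:
  #   type: NodePort
  # nodePort: 30000
```

Either way, the important part is that UDP traffic can actually reach the bridge from outside the cluster.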
Hi @spijet,
Adding the IP of the LoadBalancer is also necessary for JVB to properly connect the users.
jvb:
replicaCount: 1
image:
repository: jitsi/jvb
xmpp:
user: jvb
password: WHATEVER
## Set public IP addresses to be advertised by JVB.
# You can specify your nodes' IP addresses,
# or IP addresses of proxies/LoadBalancers used for your
# Jitsi Meet installation. Or both!
#
# Note that only the first IP address will be used for legacy
# `DOCKER_HOST_ADDRESS` environment variable.
#
publicIPs:
- 1.2.3.4
# - 1.2.3.4
# - 5.6.7.8
## Use a STUN server to help some users punch through some
# especially nasty NAT setups. Usually makes sense for P2P calls.
stunServers: 'meet-jit-si-turnrelay.jitsi.net:443'
## Try to use the hostPort feature:
# (might not be supported by some clouds or CNI engines)
useHostPort: true
useNodeIP: true
## Use host's network namespace:
# (not recommended, but might help for some cases)
useHostNetwork: true
## UDP transport port:
UDPPort: 10000
I'm not sure if you need `useHostPort: true`; we added it thinking of possible issues with double NAT in K8s, but it's probably not necessary.
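If it turns out `useHostPort` isn't needed in your case, a minimal sketch without it (assuming your provider's LoadBalancer really does forward UDP port 10000, and using the same placeholder IP as above) would be:

```yaml
jvb:
  useHostPort: false
  useHostNetwork: false
  UDPPort: 10000
  ## Advertise the LoadBalancer's public IP (placeholder, replace with yours):
  publicIPs:
    - 1.2.3.4
  service:
    type: LoadBalancer
    ## "Local" keeps the client's source IP and skips the extra SNAT hop,
    #  but only routes traffic to nodes that actually run a JVB pod.
    #  This is a suggestion, not something I've verified with this chart:
    externalTrafficPolicy: Local
```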
Hello @kpeiruza!
Of course, you have to let JVB know the address it should announce to the users. Right now I'm testing the current `main` branch on a 4-node cluster with 4 instances of JVB running with this configuration:
jvb:
replicaCount: 4
UDPPort: 32768
useHostPort: true
publicIPs:
# A list of public IP addresses of
# all nodes in the cluster:
- 1.2.3.4
- 5.6.7.8
- 4.3.2.1
- 8.7.6.5
stunServers: meet-jit-si-turnrelay.jitsi.net:443,stun1.l.google.com:19302,stun2.l.google.com:19302,stun3.l.google.com:19302,stun4.l.google.com:19302
octo:
enabled: true
So far so good, it seems to be working just fine (and now I finally figured out how to make it work properly without WebSockets, yay!). Note that I'm using `useHostPort: true` and not `useHostNetwork`: this way (AFAICT) every node with JVB on it exposes only its own instance of JVB and not all of them, without the risk of sharing the kernel network namespace between JVB and the host.
I'm going to close the issue now. Please reopen if you still have this problem.