jitsi-contrib/jitsi-helm

Cannot see the other participants video

peter-chan-hkmci opened this issue · 8 comments

I am new to Jitsi Meet and I cannot see the other participants' video streams.
Where should I start debugging this issue?

For the ingress config, we are:

  1. Using a GCP load balancer to route requests into GKE
  2. Sending all network traffic to an nginx deployment
  3. Having the nginx deployment proxy the traffic to the Jitsi services based on the hostname in the URL (e.g. https://jitsi-dev.abc.com/)

Hello @peter-chan-hkmci!

AFAIK, Jitsi Meet has required a UDP transport for the videobridge (JVB) since a couple of versions ago. If you are unable to open a UDP port for the videobridge (or cannot afford to, e.g. due to your org's security policies), you can use a TURN server like coturn to tunnel the UDP streams over TCP for your users instead.
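
If you can open a UDP port, the relevant values look roughly like this. This is only a sketch: the key names (publicURL, jvb.UDPPort, jvb.publicIP, jvb.service.type) are the ones that come up later in this thread, so double-check the exact layout against the chart's values.yaml for your version:

# values.yaml (sketch, not a definitive configuration)
publicURL: "https://jitsi-dev.abc.com"

jvb:
  ### UDP media port that has to be reachable from the outside:
  UDPPort: 10000
  ### Public IP that JVB advertises to clients (placeholder):
  publicIP: "<your public IP>"
  service:
    ### Expose the UDP port directly, bypassing the HTTP ingress:
    type: LoadBalancer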

I'm currently facing a similar issue, to be honest, but I managed to make it work using a NodePort service for JVB plus an extra ConfigMap for the Nginx Ingress Controller that proxies the UDP streams via its stream {} module.

Hi @spijet, do you have an example of the ConfigMap you have used for this?

I use this ConfigMap to make the Nginx Ingress Controller work with long-lived UDP streams:

apiVersion: v1
data:
  ### This option does the magic: it sets the stream module's proxy_responses,
  #  so long-lived UDP sessions aren't treated as finished after a single response:
  proxy-stream-responses: "999999999"
  worker-cpu-affinity: auto
  worker-processes: "6"
kind: ConfigMap
metadata:
  labels:
    app: ingress-nginx
  name: nginx-configuration
  namespace: ingress-nginx

And then there's the usual ConfigMap that maps an external UDP port to the JVB service:

apiVersion: v1
data:
  ### Set your preferred UDP port here:
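  ### Format: "<external port>": "<namespace>/<service name>:<service port>"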
  "30000": jitsi/jitsi-meet-jvb:30000
kind: ConfigMap
metadata:
  name: nginx-ingress-controller-udp
  namespace: ingress-nginx

If you use a Helm chart to manage your Nginx Ingress Controller (e.g. the one from Bitnami), you can add this snippet to the chart's values:

    udp:
      "10000": jitsi/jitsi-meet-jvb:10000

A much simpler way would be to just use a NodePort-type service that'd act as a port-forward for JVB.
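
Something along these lines (a sketch; the name, namespace and selector labels are placeholders that must match the JVB pods created by your release):

apiVersion: v1
kind: Service
metadata:
  name: jitsi-jvb-udp    ### hypothetical name
  namespace: jitsi       ### adjust to your release's namespace
spec:
  type: NodePort
  ### The selector is an assumption; copy the labels from your JVB pods:
  selector:
    app.kubernetes.io/name: jitsi-meet
    app.kubernetes.io/component: jvb
  ports:
    - name: jvb-udp
      protocol: UDP
      port: 30000        ### JVB's UDP media port (jvb.UDPPort)
      targetPort: 30000
      nodePort: 30000    ### keep it identical to the port above

Keeping the node port identical to JVB's own UDP port (which then has to sit within the cluster's NodePort range, 30000-32767 by default) avoids a mismatch between the port JVB advertises to clients and the port that's actually reachable from outside.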

Tried the given values with:

  • publicURL set to my web URL,

  • jvb.service.type set to LoadBalancer,

  • jvb.UDPPort set to 30010 (anything above 30000 seemed reasonable),

  • jvb.publicIP set to my public IP.

Since it's a LoadBalancer, I got a local IP for it, which in my case is 9.0.0.153. I have then port-forwarded this on my router (9.0.0.153 on port 30010, UDP; also tried both, same result). When connecting I do not see the other person's video, just a black screen. I am using Traefik for the web part, but since I am forwarding to JVB directly I don't think that's likely to matter.

I have also tried NodePort with the exact same result. Any idea what I am missing here?

EDIT:
I left it for 10 minutes and it didn't work; went out to eat, came back, and now it works... mysterious ways.

I ran into this issue as well; it only affected Chrome users, and only video (audio was fine) with 3+ participants.

Ultimately, I discovered it was because websockets default to being disabled in the chart.

Set this in your chart values to fix it:

helm:
  values:
    jvb:
      websockets:
        enabled: true

In the end, the problem was caused by two WebSocket-related environment variables (ENABLE_COLIBRI_WEBSOCKET and ENABLE_XMPP_WEBSOCKET) defaulting to true in the Jitsi Meet Docker images while being left unset in this chart, which resulted in the Kubernetes side being set up to work without WS support while the containers themselves expected it to be enabled.

Starting with the v1.3.0 chart release, these variables are explicitly set to false unless you enable WS support for either of the components:

# values.yaml
websockets:
  ## JVB/Colibri WS transport:
  #  (previously known as `.Values.jvb.websockets`)
  colibri:
    enabled: true
    serverID: <...>
  ## New option for Prosody/XMPP:
  xmpp:
    enabled: true