More than two clients in a room leads to problems with sockets and video streaming
DeamonMV opened this issue · 9 comments
What I have
Kubernetes cluster
Client Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.1", GitCommit:"3ddd0f45aa91e2f30c70734b175631bec5b5825a", GitTreeState:"clean", BuildDate:"2022-05-24T12:26:19Z", GoVersion:"go1.18.2", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.4
Server Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.7", GitCommit:"42c05a547468804b2053ecf60a3bd15560362fc2", GitTreeState:"clean", BuildDate:"2022-05-24T12:24:41Z", GoVersion:"go1.17.10", Compiler:"gc", Platform:"linux/amd64"}
helm
version.BuildInfo{Version:"v3.6.3", GitCommit:"d506314abfb5d21419df8c7e7e68012379db2354", GitTreeState:"clean", GoVersion:"go1.16.5"}
jitsi deployed by helm
Part of the Helm config:
jvb:
  service:
    enabled: true
    type: ClusterIP
  publicIP: 1.2.3.4
  UDPPort: 31848
  replicaCount: 1
  image:
    repository: jitsi/jvb
  xmpp:
    user: jvb
    password:
  stunServers: 'meet-jit-si-turnrelay.jitsi.net:443'
  useHostPort: true
  useNodeIP: true
  breweryMuc: jvbbrewery
  livenessProbe:
    httpGet:
      path: /about/health
      port: 8080
  readinessProbe:
    httpGet:
      path: /about/health
      port: 8080
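For reference, one common way to avoid proxying media through the ingress controller entirely is to expose JVB's UDP port directly via a NodePort or LoadBalancer service. A hedged sketch of the relevant values (field names follow the chart fragment above; verify them against your chart version):

```yaml
jvb:
  service:
    enabled: true
    type: NodePort     # or LoadBalancer, if the cluster supports it
  publicIP: 1.2.3.4    # the address JVB advertises in its ICE candidates
  UDPPort: 31848       # must match the externally reachable port
  useHostPort: true    # additionally bind the port directly on the node
```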
Ingress config:
kubectl -n ingress-nginx get configmaps ingress-nginx-udp -oyaml
apiVersion: v1
data:
  "31848": namespace-jitsi/jitsi-jitsi-meet-jvb:31848::
kind: ConfigMap
metadata:
  annotations:
    meta.helm.sh/release-name: ingress-nginx
    meta.helm.sh/release-namespace: ingress-nginx
  creationTimestamp: "2022-09-28T09:47:57Z"
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.2.1
    helm.sh/chart: ingress-nginx-4.1.4
  name: ingress-nginx-udp
  namespace: ingress-nginx
  resourceVersion: "251710"
  uid: f31cf8d9-0449-4160-b878-9f08b2791caa
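For completeness, a UDP-services ConfigMap like this only takes effect if the ingress-nginx controller is started with the matching flag and the controller's own Service actually exposes the port. A hedged sketch (names match the release above; verify against your ingress-nginx chart values):

```yaml
# Controller container args (usually set via the ingress-nginx Helm chart):
#   --udp-services-configmap=ingress-nginx/ingress-nginx-udp
#
# The controller's Service also needs a matching UDP port entry:
ports:
  - name: jvb-udp
    port: 31848
    targetPort: 31848
    protocol: UDP
```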
JVB version image: jitsi/jvb:stable-6865
My domain is behind Cloudflare's "proxy" feature. On the Kubernetes side I have a Cloudflare Origin certificate.
Problem
When more than two clients connect to a room, I stop seeing the video streams from the other clients.
With only two clients in the room, video works fine.
In the JVB logs I see messages like this:
JVB 2022-09-28 13:53:35.534 INFO: [57] [confId=3ff9288a60a86ebb gid=70464 stats_id=Lavada-uhd componentId=1 conf_name=test2@muc.meet.jitsi ufrag=cto371ge1v3rqt name=stream-181c0a95 epId=181c0a95 local_ufrag=cto371ge1v3rqt] Component.addUpdateRemoteCandidates#347: Update remote candidate for stream-181c0a95.RTP: 10.119.12.210:40555/udp
JVB 2022-09-28 13:53:35.534 INFO: [57] [confId=3ff9288a60a86ebb gid=70464 stats_id=Lavada-uhd componentId=1 conf_name=test2@muc.meet.jitsi ufrag=cto371ge1v3rqt name=stream-181c0a95 epId=181c0a95 local_ufrag=cto371ge1v3rqt] Component.addUpdateRemoteCandidates#347: Update remote candidate for stream-181c0a95.RTP: 192.168.155.22:56254/udp
JVB 2022-09-28 13:53:35.535 INFO: [57] [confId=3ff9288a60a86ebb gid=70464 stats_id=Lavada-uhd componentId=1 conf_name=test2@muc.meet.jitsi ufrag=cto371ge1v3rqt name=stream-181c0a95 epId=181c0a95 local_ufrag=cto371ge1v3rqt] Component.addUpdateRemoteCandidates#347: Update remote candidate for stream-181c0a95.RTP: 192.168.10.3:42715/udp
JVB 2022-09-28 13:53:35.535 INFO: [57] [confId=3ff9288a60a86ebb gid=70464 stats_id=Lavada-uhd componentId=1 conf_name=test2@muc.meet.jitsi ufrag=cto371ge1v3rqt name=stream-181c0a95 epId=181c0a95 local_ufrag=cto371ge1v3rqt] Component.updateRemoteCandidates#484: new Pair added: 10.233.88.17:31848/udp/host -> 10.119.12.210:40555/udp/host (stream-181c0a95.RTP).
JVB 2022-09-28 13:53:35.535 INFO: [57] [confId=3ff9288a60a86ebb gid=70464 stats_id=Lavada-uhd componentId=1 conf_name=test2@muc.meet.jitsi ufrag=cto371ge1v3rqt name=stream-181c0a95 epId=181c0a95 local_ufrag=cto371ge1v3rqt] Component.updateRemoteCandidates#484: new Pair added: 10.233.88.17:31848/udp/host -> 192.168.155.22:56254/udp/host (stream-181c0a95.RTP).
JVB 2022-09-28 13:53:35.535 INFO: [57] [confId=3ff9288a60a86ebb gid=70464 stats_id=Lavada-uhd componentId=1 conf_name=test2@muc.meet.jitsi ufrag=cto371ge1v3rqt name=stream-181c0a95 epId=181c0a95 local_ufrag=cto371ge1v3rqt] Component.updateRemoteCandidates#484: new Pair added: 10.233.88.17:31848/udp/host -> 192.168.10.3:42715/udp/host (stream-181c0a95.RTP).
JVB 2022-09-28 13:53:35.538 WARNING: [102] [confId=3ff9288a60a86ebb gid=70464 stats_id=Lavada-uhd conf_name=test2@muc.meet.jitsi ufrag=cto371ge1v3rqt epId=181c0a95 local_ufrag=cto371ge1v3rqt] ConnectivityCheckClient.startCheckForPair#374: Failed to send BINDING-REQUEST(0x1)[attrib.count=6 len=92 tranID=0x32F4F18383019F0153752477]
java.lang.IllegalArgumentException: No socket found for 10.233.88.17:31848/udp->10.119.12.210:40555/udp
at org.ice4j.stack.NetAccessManager.sendMessage(NetAccessManager.java:631)
at org.ice4j.stack.NetAccessManager.sendMessage(NetAccessManager.java:581)
at org.ice4j.stack.StunClientTransaction.sendRequest0(StunClientTransaction.java:267)
at org.ice4j.stack.StunClientTransaction.sendRequest(StunClientTransaction.java:245)
at org.ice4j.stack.StunStack.sendRequest(StunStack.java:680)
at org.ice4j.ice.ConnectivityCheckClient.startCheckForPair(ConnectivityCheckClient.java:335)
at org.ice4j.ice.ConnectivityCheckClient.startCheckForPair(ConnectivityCheckClient.java:231)
at org.ice4j.ice.ConnectivityCheckClient$PaceMaker.run(ConnectivityCheckClient.java:938)
at org.ice4j.util.PeriodicRunnable.executeRun(PeriodicRunnable.java:206)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
JVB 2022-09-28 13:53:35.538 INFO: [102] [confId=3ff9288a60a86ebb gid=70464 stats_id=Lavada-uhd conf_name=test2@muc.meet.jitsi ufrag=cto371ge1v3rqt epId=181c0a95 local_ufrag=cto371ge1v3rqt] ConnectivityCheckClient$PaceMaker.run#942: Pair failed: 10.233.88.17:31848/udp/host -> 10.119.12.210:40555/udp/host (stream-181c0a95.RTP)
JVB 2022-09-28 13:53:35.555 WARNING: [102] [confId=3ff9288a60a86ebb gid=70464 stats_id=Lavada-uhd conf_name=test2@muc.meet.jitsi ufrag=cto371ge1v3rqt epId=181c0a95 local_ufrag=cto371ge1v3rqt] ConnectivityCheckClient.startCheckForPair#374: Failed to send BINDING-REQUEST(0x1)[attrib.count=6 len=92 tranID=0x43F4F18383011ECDAD3835B8]
Question
What can I do about this? How can I fix the problem?
If you need more logs or info, please let me know.
Thank you.
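One quick way to narrow this down is to check whether the advertised UDP port is actually reachable from outside the cluster: JVB speaks ICE/STUN on its media port, so sending it a minimal STUN Binding Request and waiting for any reply is a reasonable probe. A sketch in Python; the IP and port below are placeholders taken from the values above, not verified endpoints:

```python
import os
import socket
import struct


def make_stun_binding_request() -> bytes:
    """Build a minimal 20-byte STUN Binding Request header (RFC 5389)."""
    msg_type = 0x0001            # Binding Request
    msg_length = 0               # no attributes
    magic_cookie = 0x2112A442    # fixed value defined by RFC 5389
    transaction_id = os.urandom(12)
    return struct.pack("!HHI", msg_type, msg_length, magic_cookie) + transaction_id


def probe_udp(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if anything answers our STUN request on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(make_stun_binding_request(), (host, port))
        try:
            data, _addr = sock.recvfrom(2048)
            return len(data) >= 20   # any well-formed STUN reply is >= 20 bytes
        except socket.timeout:
            return False
```

Usage would be something like `probe_udp("1.2.3.4", 31848)` from a machine outside the cluster; `False` on a port that should be open usually points at a firewall or proxy dropping UDP rather than at JVB itself.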
Hello @DeamonMV!
Sorry for the late reply. Can you verify that your CF proxy can pass UDP traffic (or that your setup announces your server's direct IP for JVB access)?
I use a reverse proxy in front of my k8s cluster too, but in my case the proxy node NATs all UDP traffic to the JVB as well. I'm currently running the second-to-last stable version of Jitsi (tag stable-7648-4) with these settings:
jvb:
  image:
    tag: 'stable-7648-4'
  metrics:
    enabled: true
  publicIP: X.X.X.X
  stunServers: "stun1.l.google.com:19302,stun2.l.google.com:19302,stun3.l.google.com:19302,stun4.l.google.com:19302"
  websockets:
    enabled: true
I also have this in my Nginx Ingress Controller ConfigMap:
data:
  proxy-stream-responses: "999999999"
This option forces Nginx's stream module to wait and proxy the UDP session for as many response packets from the server as possible. The default value for this option in the Nginx Ingress Controller's config template is "1", which tears down the UDP session as soon as JVB sends its first reply packet.
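For context, `proxy-stream-responses` maps to nginx's `proxy_responses` directive inside the generated `stream {}` block, so the rendered config ends up looking roughly like this (a sketch of the relevant directives, not a complete config; the upstream name is illustrative):

```nginx
stream {
    server {
        listen 31848 udp;
        # Keep the UDP "session" open for up to this many reply packets
        # instead of closing it after the first one (the default is 1):
        proxy_responses 999999999;
        proxy_pass upstream_jvb;   # illustrative upstream name
    }
}
```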
I'm experiencing the exact same issue with a (non-Cloudflare) TCP/UDP proxy in front of my ingress controller and JVB LoadBalancer service. I'm wondering if it has to do with JVB not getting the client's real IP address. I am using Traefik as the TCP/UDP proxy in front of my cluster, which supports passing client IP information via the PROXY protocol. According to this forum post I stumbled across, it doesn't seem like JVB can directly accept traffic with the PROXY protocol; rather, I need to configure a TURN server (I have experience with coturn, which supports the PROXY protocol). I will try to get this working and report back.
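For anyone going the TURN route, a minimal coturn configuration for relaying Jitsi media looks roughly like the following (a sketch for a standalone coturn in front of JVB; the IP, realm, and secret are placeholders, and directive applicability should be checked against the coturn docs for your version):

```conf
# /etc/turnserver.conf -- placeholder values throughout
listening-port=3478
tls-listening-port=443
external-ip=1.2.3.4
realm=meet.example.com
use-auth-secret
static-auth-secret=CHANGE_ME
```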
We abandoned the idea of using Jitsi; we ran into some other problems and decided to drop it.
Anyway, thank you for your response.
By the way, I'll keep the issue open; maybe someone will find a good solution.
I didn't end up having to implement my idea. It turned out I had forgotten to open the JVB UDP port in the firewall for my OpenStack instance running the proxy. Everything is working now. I also bumped the version of all the container images for this Jitsi deployment to stable-7648-4.
@DeamonMV, can you please share the problems you found? Maybe there's something we can fix in the chart.
@starcraft66, glad to hear it. Yep, stable-7648-4 works with this chart without any problems, but there are some updates to be made before stable-7830 is usable. I'm going to do extensive testing later today and make a PR if it works as expected.
(Originally this was supposed to be a reply to #58 (comment), but either my e-mail client was being lazy or GitHub forgot to post the comment in time, so it appeared here. Sorry.)
@starcraft66, this is true: JVB cannot parse PROXY protocol headers properly. But I'm not sure it's possible to use the PROXY protocol for UDP at all. :)
In my case the Ingress Controller (nginx) proxies all UDP streams to/from JVB purely in userspace, losing the client's IP address in the process (JVB sees all clients as connecting from the nginx pod IP).
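As a side note, when JVB is exposed through a Kubernetes Service rather than through nginx's userspace stream proxy, the client source IP can be preserved by setting `externalTrafficPolicy: Local` on a NodePort/LoadBalancer Service. A sketch (the Service name is taken from the ConfigMap entry earlier in this thread; adjust selectors and ports to your deployment):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: jitsi-jitsi-meet-jvb
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local   # preserve client source IP; no second SNAT hop
  ports:
    - name: media
      port: 31848
      protocol: UDP
```

The trade-off is that `Local` only routes traffic to nodes that actually run a JVB pod, so health-check and scheduling implications are worth checking before relying on it.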
Since #60 is merged now, newer versions (stable-7830 and up) should work too. Feel free to reopen if you have any problems.
Hello,
Can you share your Traefik config?
Can you scale your JVB pods with your config?
Hello @jvkassi!
@Jimmy-SafeCash had some success running this chart with Traefik as the ingress proxy in #64, so you can ask them about it in that discussion thread.