jitsi-contrib/jitsi-helm

can't get my first conf call - prosody stuck

cloud32 opened this issue · 2 comments

Hi,

The installation went OK:

	[d50c-cml-app16 JITSI]$ helm install -f jitsi.yaml  myjitsi jitsi/jitsi-meet
		NAME: myjitsi
		LAST DEPLOYED: Tue Dec  6 08:18:40 2022
		NAMESPACE: default
		STATUS: deployed
		REVISION: 1
		NOTES:
		1. Get the application URL by running these commands:
		  export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=jitsi-meet,app.kubernetes.io/component=web,app.kubernetes.io/instance=myjitsi" -o jsonpath="{.items[0].metadata.name}")
		  echo "Visit http://127.0.0.1:8080 to use your application"
		  kubectl --namespace default port-forward $POD_NAME 8080:80
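
(For reference, the values actually applied to the release can be double-checked afterwards; myjitsi is the release name from the install command above.)

	# user-supplied values (the jitsi.yaml passed with -f)
	helm get values myjitsi
	# same, merged with the chart defaults
	helm get values myjitsi --all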

	[d50c-cml-app16 JITSI]$ kubectl get pod
		NAME                                        READY   STATUS    RESTARTS   AGE
		myjitsi-jitsi-meet-jicofo-b86bdb469-mkpxj   0/1     Running   0          4s
		myjitsi-jitsi-meet-jvb-766c5c8984-qjjsd     0/1     Running   0          4s
		myjitsi-jitsi-meet-web-7f98b75c4d-gff6j     0/1     Running   0          4s
		myjitsi-prosody-0                           0/1     Pending   0          4s

	[d50c-cml-app16 JITSI]$ kubectl get service
		NAME                     TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)                                                                AGE
		myjitsi-jitsi-meet-jvb   ClusterIP      10.104.75.209    <none>          10000/UDP                                                              33s
		myjitsi-jitsi-meet-web   ClusterIP      10.101.137.224   <none>          80/TCP                                                                 33s
		myjitsi-prosody          ClusterIP      10.110.85.66     <none>          5280/TCP,5281/TCP,5347/TCP,5222/TCP,5269/TCP                           33s

I manually added NodePort services so that the web and Colibri (JVB stats) endpoints are reachable from outside the cluster (a sketch of such a service follows the listing below):

	[d50c-cml-app16 JITSI]$ kubectl get service
		NAME                     TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)                                                                AGE
		jitsi-ext-jvb            NodePort       10.111.232.38    <none>          8080:8444/TCP                                                          6s
		jitsi-ext-web            NodePort       10.98.10.128     <none>          80:8443/TCP                                                            118s
		myjitsi-jitsi-meet-jvb   ClusterIP      10.103.227.8     <none>          10000/UDP                                                              5m23s
		myjitsi-jitsi-meet-web   ClusterIP      10.109.214.17    <none>          80/TCP                                                                 5m23s
		myjitsi-prosody          ClusterIP      10.96.250.39     <none>          5280/TCP,5281/TCP,5347/TCP,5222/TCP,5269/TCP                           5m23s
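
For reference, such a NodePort service can be created with a manifest roughly like the one below (the selector labels are assumed to match the web pods' labels from the chart's NOTES output above; the nodePort 8443 is only accepted because this cluster's service-node-port-range extends beyond the default 30000-32767). The JVB stats service (8080:8444) is built the same way against the jvb component.

	apiVersion: v1
	kind: Service
	metadata:
	  name: jitsi-ext-web
	spec:
	  type: NodePort
	  selector:
	    # assumed pod labels, taken from the chart's NOTES output
	    app.kubernetes.io/instance: myjitsi
	    app.kubernetes.io/name: jitsi-meet
	    app.kubernetes.io/component: web
	  ports:
	    - name: http
	      port: 80          # service port
	      targetPort: 80    # web container port (plain HTTP)
	      nodePort: 8443    # must lie inside the cluster's service-node-port-range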

Opening Chrome at the cluster's external IP on port 8443 works (the Jitsi portal is shown).
Opening it on port 8444 works too (the JSON Colibri stats are returned).
Creating a new meeting works.
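
For reference, the same checks can be run from a shell outside the cluster (the node IP is a placeholder, and the /colibri/stats path assumes the JVB's REST stats endpoint is enabled):

	# <node-external-ip> is a placeholder for one of the cluster nodes' external IPs
	curl -s http://<node-external-ip>:8443/ | head -n 5     # Jitsi Meet web page (HTML)
	curl -s http://<node-external-ip>:8444/colibri/stats    # JVB stats as JSON (path assumed)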

However, the user can't connect to the meeting.
The page keeps showing reconnect messages.
Same from other browsers on other hosts trying to connect.

Any ideas?

So I tried checking (with tcpdump ...) what's going on inside the web pod, and it seems to try to communicate with the prosody pod, but with no success.
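
For reference, the capture inside the web pod looked roughly like this (the pod name is a placeholder; tcpdump has to be present in the web image, otherwise an ephemeral debug container or a capture on the node is needed):

	# <web-pod> is a placeholder for the myjitsi-jitsi-meet-web-... pod name
	# 5280 is the BOSH/websocket port the web container proxies to on the prosody service
	kubectl exec -it <web-pod> -- tcpdump -ni any port 5280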

Then I noticed that one of the 4 Jitsi pods is stuck in "Pending".
Describing it showed a "FailedScheduling" warning:

	[d50c-cml-app16 JITSI]$ kubectl get pod
	NAME                                         READY   STATUS    RESTARTS   AGE
	myjitsi-jitsi-meet-jicofo-64dcdb55d8-6d24t   1/1     Running   0          22s
	myjitsi-jitsi-meet-jvb-7f6544f744-nmpfk      1/1     Running   0          22s
	myjitsi-jitsi-meet-web-7f98b75c4d-mp229      1/1     Running   0          22s
	myjitsi-prosody-0                            0/1     Pending   0          22s



	[d50c-cml-app16 JITSI]$ kubectl describe pod myjitsi-prosody-0
	Name:           myjitsi-prosody-0
	Namespace:      default
	Priority:       0
	Node:           <none>
	Labels:         app.kubernetes.io/instance=myjitsi
					app.kubernetes.io/name=prosody
					controller-revision-hash=myjitsi-prosody-65fd8ddb96
					statefulset.kubernetes.io/pod-name=myjitsi-prosody-0
	Annotations:    <none>
	Status:         Pending
	IP:             
	IPs:            <none>
	Controlled By:  StatefulSet/myjitsi-prosody
	Containers:
	  prosody:
		Image:       jitsi/prosody:stable-8044
		Ports:       5222/TCP, 5269/TCP, 5347/TCP, 5280/TCP, 5281/TCP
		Host Ports:  0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP
		Liveness:    http-get http://:bosh-insecure/http-bind delay=0s timeout=1s period=10s #success=1 #failure=3
		Readiness:   http-get http://:bosh-insecure/http-bind delay=0s timeout=1s period=10s #success=1 #failure=3
		Environment Variables from:
		  myjitsi-prosody         ConfigMap  Optional: false
		  myjitsi-prosody         Secret     Optional: false
		  myjitsi-prosody-jicofo  Secret     Optional: false
		  myjitsi-prosody-jvb     Secret     Optional: false
		  myjitsi-prosody-common  ConfigMap  Optional: false
		Environment:              <none>
		Mounts:
		  /config/data from prosody-data (rw)
		  /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sdrj6 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  prosody-data:
		Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
		ClaimName:  prosody-data-myjitsi-prosody-0
		ReadOnly:   false
	  kube-api-access-sdrj6:
		Type:                    Projected (a volume that contains injected data from multiple sources)
		TokenExpirationSeconds:  3607
		ConfigMapName:           kube-root-ca.crt
		ConfigMapOptional:       <nil>
		DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
								 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age   From               Message
	  ----     ------            ----  ----               -------
	  Warning  FailedScheduling  34s   default-scheduler  0/2 nodes are available: 2 pod has unbound immediate PersistentVolumeClaims.
	  Warning  FailedScheduling  33s   default-scheduler  0/2 nodes are available: 2 pod has unbound immediate PersistentVolumeClaims.
	[d50c-cml-app16 JITSI]$ 

So apparently it's a K8s PVC issue:

	[d50c-cml-app16 JITSI]$ kubectl get pvc
	NAME                             STATUS    VOLUME       CAPACITY   ACCESS MODES   STORAGECLASS           AGE
	prosody-data-myjitsi-prosody-0   Pending                                                                 9h

	[d50c-cml-app16 JITSI]$ kubectl describe pvc prosody-data-myjitsi-prosody-0
	Name:          prosody-data-myjitsi-prosody-0
	Namespace:     default
	StorageClass:  
	Status:        Pending
	Volume:        
	Labels:        app.kubernetes.io/instance=myjitsi
				   app.kubernetes.io/name=prosody
	Annotations:   <none>
	Finalizers:    [kubernetes.io/pvc-protection]
	Capacity:      
	Access Modes:  
	VolumeMode:    Filesystem
	Used By:       myjitsi-prosody-0
	Events:
	  Type    Reason         Age                  From                         Message
	  ----    ------         ----                 ----                         -------
	  Normal  FailedBinding  16s (x2202 over 9h)  persistentvolume-controller  no persistent volumes available for this claim and no storage class is set
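
For anyone hitting the same thing: the claim stays Pending because the cluster has neither a default StorageClass nor a PersistentVolume matching it. On a test cluster, one quick way to unblock it is a static PV the claim can bind to (the size, access mode and path below are assumptions; match them to what the PVC actually requests); alternatively, install a dynamic provisioner such as rancher/local-path-provisioner and mark it as the default StorageClass.

	apiVersion: v1
	kind: PersistentVolume
	metadata:
	  name: prosody-data-pv
	spec:
	  capacity:
	    storage: 1Gi                  # assumption; must be >= the size the PVC requests
	  accessModes:
	    - ReadWriteOnce               # assumption; must match the PVC's access mode
	  persistentVolumeReclaimPolicy: Retain
	  hostPath:
	    path: /mnt/data/prosody       # test-only; hostPath data does not follow the pod across nodes

Once the claim binds, the prosody pod can be scheduled and the other Jitsi pods should be able to reach it.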

Closing this issue.