munnerz/kube-plex

pms-elastic-transcoder pod fails to start - is this project still up to date?

Opened this issue · 7 comments

Hello,

Been trying to get this project going inside a Minikube cluster.
I managed to set it up; after a lot of adjustments to the values, the project eventually kicked in and started OK.
I had to run with kube-plex disabled, as when it was enabled the pods would go from ContainerCreating straight into Error and then disappear.
journalctl showed no obvious cause, nor did Plex itself.

Considering the chart ships with an older PMS version, is kube-plex still current?
After 2-3 hours of indexing the data and streaming some episodes, the cluster stopped working entirely, as the transcoder now fails to start at all (with kubePlex disabled).

I'm not sure whether this project still works at all.

I have no issue using this as a Helm chart for Plex within a k8s cluster. I just don't use kube-plex for transcode offloading (I have kubePlex.enabled set to false).
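For anyone searching, that override looks roughly like this (a sketch; the key name follows the stock kube-plex chart values, so check it against your chart version):

```yaml
# values.yaml override for the kube-plex chart (key name assumed from the stock chart)
kubePlex:
  enabled: false   # transcode inside the PMS pod instead of spawning transcoder pods
```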

This project is in dire need of help from the community, @munnerz has been pretty busy with other projects.

Hi onedr0p,

My goal was either to run very few plex-kube-plex pods and offload transcoding with kube-plex, or to scale up the plex-kube-plex pods and balance transcoding across the core pods.
As the offloading failed, with the pms-elastic-transcoder pods dying every time, I re-deployed (helm upgrade --install) with it set to false.
Transcoding then worked directly on the plex-kube-plex pods.
I even scaled up to 8 pods, and I seem to be moved seamlessly between them during streaming and transcoding.
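For reference, scaling the PMS pods as described above can be done with a one-liner (the deployment name plex-kube-plex is an assumption based on the release name; adjust it to match yours):

```shell
# Scale the Plex deployment to 8 replicas -- deployment name assumed from a release called "plex"
kubectl scale deployment plex-kube-plex --replicas=8
```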

Then, out of nowhere, no content would transcode any more, and the Plex logs reported "transcode runner appears to have died". I tried redeploying the entire Helm chart, but it never recovered.

To be honest, I'm not entirely sure what caused it.
It would be great to see a refresh using more up-to-date PMS images and methods.

I'm all ears if you could offer any expertise.

Sorry I cannot help much, but don't scale and I think your issue would be resolved. For reference, check out my Helm values here.

It seems likely that this is resource-related. The most common issue I see with Minikube is a lack of resources. Run kubectl describe nodes, and try getting the logs from the pods with kubectl logs --follow [podname].

When you describe nodes, it'll probably report being out of memory. If so, you'll need to free up resources.
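A minimal triage sequence along those lines (the pod name is a placeholder, and these commands obviously need a running cluster):

```shell
# Check node capacity, allocations, and conditions such as MemoryPressure
kubectl describe nodes

# Recent events often explain why a pod went straight from ContainerCreating to Error
kubectl get events --sort-by=.metadata.creationTimestamp

# Follow the logs of a failing transcoder pod (replace with your pod's name)
kubectl logs --follow pms-elastic-transcoder-abc123
```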

I believe I have a similar or related issue. Inspecting the transcode pods yields:
Error: failed to create containerd task: OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:424: container init caused \"mkdir /config/Library/Application Support/Plex Media Server/Cache/Transcode/Sessions/plex-transcode-4xibiojh3al6u0yrivdraaoi-4ec87297-1943-40b7-91ae-45a773fa4f49: read-only file system\"": unknown

There are multiple issues now regarding this - please see: #94 (comment)

I believe I have a similar or related issue. Inspecting the transcode pods yields:
Error: failed to create containerd task: OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:424: container init caused \"mkdir /config/Library/Application Support/Plex Media Server/Cache/Transcode/Sessions/plex-transcode-4xibiojh3al6u0yrivdraaoi-4ec87297-1943-40b7-91ae-45a773fa4f49: read-only file system\"": unknown

This is due to the transcode directory not being set in the Plex server settings. Assuming you have left the Helm chart values at their defaults, you need to set the transcode directory to /transcode (Settings > Transcoder > "Transcoder temporary directory").
This should allow transcoding within the Plex Media Server pod to work (but the 'elastic transcoder' will still not work, as detailed in many other posts).
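For context, the chart mounts a scratch volume at /transcode in the PMS container; a sketch of the relevant values (key names are from memory of the stock kube-plex chart and may differ in your version):

```yaml
# Transcode scratch space -- mounted at /transcode in the PMS container,
# which is why Plex's transcode directory must be pointed at /transcode.
persistence:
  transcode:
    claimName: ""   # empty: the chart falls back to an emptyDir
```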