passing templates in shinyproxy instance
aupadh12 opened this issue · 6 comments
Hi Team,
We are using the ShinyProxy Operator to run multiple ShinyProxy instances in different namespaces.
Currently, we bake the templates folder, which contains index.html and the other files needed to customize the landing page, into the ShinyProxy image itself. Thus, whenever we need to change index.html, we have to go through our internal pipeline to rebuild the ShinyProxy image.
Is it possible to avoid this and pass the templates directly, so that existing and new ShinyProxy instances can refer to the same files and changes to index.html are applied immediately?
Hi @aupadh12
There are a few options to implement this:
- mount the templates from a network share, e.g. using NFS, Ceph, S3, etc.
- adapt the Docker container to download the files from somewhere just before ShinyProxy starts (e.g. from S3, git, or a tarball)
- use Kubernetes ConfigMaps to store the files; we have one deployment using Kustomize, with the following Kustomize config:
```yaml
configMapGenerator:
  - name: sp-templates
    files:
      - templates/index.html
      - templates/app.html
      - templates/admin.html
  - name: sp-assets
    files:
      - templates/assets/custom.css
  - name: sp-fragments
    files:
      - templates/fragments/navbar.html
```
and we mount the files using:
```yaml
apiVersion: openanalytics.eu/v1alpha1
kind: ShinyProxy
metadata:
spec:
  kubernetesPodTemplateSpecPatches: |
    - op: add
      path: /spec/volumes/-
      value:
        name: templates
        configMap:
          name: sp-templates
    - op: add
      path: /spec/volumes/-
      value:
        name: assets
        configMap:
          name: sp-assets
    - op: add
      path: /spec/volumes/-
      value:
        name: fragments
        configMap:
          name: sp-fragments
    - op: add
      path: /spec/containers/0/volumeMounts/-
      value:
        mountPath: /etc/shinyproxy/templates
        name: templates
        readOnly: true
    - op: add
      path: /spec/containers/0/volumeMounts/-
      value:
        mountPath: /etc/shinyproxy/templates/assets
        name: assets
        readOnly: true
    - op: add
      path: /spec/containers/0/volumeMounts/-
      value:
        mountPath: /etc/shinyproxy/templates/fragments
        name: fragments
        readOnly: true
```
IMO the ConfigMap solution is not the cleanest one, but it works quite well in practice.
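For completeness, the second option (downloading the files just before ShinyProxy starts) can also be done without rebuilding the image, e.g. with an initContainer that fetches the templates into a shared emptyDir. A rough sketch, where the image, repository URL, and volume name are illustrative assumptions, not something from this thread:

```yaml
kubernetesPodTemplateSpecPatches: |
  - op: add
    path: /spec/volumes/-
    value:
      name: templates
      emptyDir: {}
  - op: add
    path: /spec/initContainers
    value:
      - name: fetch-templates
        image: alpine/git   # illustrative image with a git client
        command: ["git", "clone", "--depth=1", "https://example.com/my-templates.git", "/templates"]
        volumeMounts:
          - mountPath: /templates
            name: templates
  - op: add
    path: /spec/containers/0/volumeMounts/-
    value:
      mountPath: /etc/shinyproxy/templates
      name: templates
      readOnly: true
```

With this approach, updating the templates only requires pushing to the repository and restarting the ShinyProxy pod.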
Hi @LEDfan ,
I tried the ConfigMap approach and it did work for HTML, CSS and JS files. But we also have images, and those fail with an error saying the size exceeds what is allowed (ConfigMaps are capped at 1 MiB).
I am now thinking of using EFS, mounted as a PVC at the path where the templates must live (/opt/shinyproxy/templates).
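For reference, an EFS-backed PVC for this could look roughly like the following; the storage class name and size are illustrative assumptions, not from our actual setup:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shinyproxy-templates
spec:
  accessModes:
    - ReadWriteMany        # EFS supports concurrent mounts from many pods
  storageClassName: efs-sc # illustrative: the EFS CSI driver's storage class
  resources:
    requests:
      storage: 1Gi         # EFS ignores the requested size, but the field is required
```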
In order to do so, I am using the code below:
```yaml
kubernetesPodTemplateSpecPatches: |
  - op: add
    path: /spec/volumes
    value:
      - name: shinyproxy-templates
        persistentVolumeClaim:
          claimName: shinyproxy-templates
  - op: add
    path: /spec/containers/0/volumeMounts
    value:
      - mountPath: "/opt/shinyproxy/templates"
        name: shinyproxy-templates
```
But after trying this, I noticed that the application asks me to log in again, which means the OpenID authentication method I configured is no longer working. After removing the above section, OpenID authentication works again.
Is there any issue with this kind of implementation, where the templates are mounted via a PVC?
Note: for template_path I tried both ./template and /opt/shinyproxy/template.
I also tried the following:
```yaml
kubernetesPodTemplateSpecPatches: |
  - op: add
    path: /spec/volumes/-
    value:
      - name: shinyproxy-templates
        persistentVolumeClaim:
          claimName: shinyproxy-templates
  - op: add
    path: /spec/containers/0/volumeMounts/-
    value:
      - mountPath: "/opt/shinyproxy/templates"
        name: shinyproxy-templates
```
But the above does not work either: the ShinyProxy pod itself does not start.
Can you please take a look?
Hi
Regarding the first configuration: you are overriding the volumes and volumeMounts of the ShinyProxy pod, so the pod no longer contains the reference to the application.yml mount, and that ConfigMap is no longer mounted.
> But using above does not work either since then shinyproxy pod itself is not started.
At first sight the second config looks okay to me (but due to the formatting I cannot be 100% sure). What exactly do you mean by the above statement? Is the operator not creating a pod, or does the created pod not fully start up?
In the first case, check the logs of the operator; it will probably be outputting an error regarding the patch. In the second case, check the logs of the ShinyProxy pod.
I checked the logs of the operator and found an error. However, I am not sure what to make of it.
```yaml
kubernetesPodTemplateSpecPatches: |
  - op: add
    path: /spec/volumes/-
    value:
      - name: shinyproxy-templates
        persistentVolumeClaim:
          claimName: shinyproxy-templates
  - op: add
    path: /spec/containers/0/volumeMounts/-
    value:
      - mountPath: "/opt/shinyproxy/templates"
        name: shinyproxy-templates
```
This is the patch I have added, now in proper format.
The error from the operator is below:
```
18:36:24.607 [pool-4-thread-1] WARN e.o.s.c.ShinyProxyController - Caught an exception while processing event. [Attempt 5/5]
java.lang.IllegalArgumentException: Cannot deserialize instance of io.fabric8.kubernetes.api.model.VolumeMount out of START_ARRAY token
 at [Source: UNKNOWN; line: -1, column: -1] (through reference chain: io.fabric8.kubernetes.api.model.PodTemplateSpec["spec"]->io.fabric8.kubernetes.api.model.PodSpec["containers"]->java.util.ArrayList[0]->io.fabric8.kubernetes.api.model.Container["volumeMounts"]->java.util.ArrayList[1])
	at com.fasterxml.jackson.databind.ObjectMapper._convert(ObjectMapper.java:4234)
	at com.fasterxml.jackson.databind.ObjectMapper.convertValue(ObjectMapper.java:4165)
	at eu.openanalytics.shinyproxyoperator.components.PodTemplateSpecPatcher.patch(PodTemplateSpecPatcher.kt:58)
	at eu.openanalytics.shinyproxyoperator.components.PodTemplateSpecFactory.create(PodTemplateSpecFactory.kt:125)
	at eu.openanalytics.shinyproxyoperator.components.ReplicaSetFactory.create(ReplicaSetFactory.kt:56)
	at eu.openanalytics.shinyproxyoperator.controller.ShinyProxyController.reconcileSingleShinyProxyInstance(ShinyProxyController.kt:243)
	at eu.openanalytics.shinyproxyoperator.controller.ShinyProxyController.receiveAndHandleEvent$tryReceiveAndHandleEvent(ShinyProxyController.kt:108)
	at eu.openanalytics.shinyproxyoperator.controller.ShinyProxyController.receiveAndHandleEvent(ShinyProxyController.kt:119)
	at eu.openanalytics.shinyproxyoperator.controller.ShinyProxyController.run(ShinyProxyController.kt:69)
	at eu.openanalytics.shinyproxyoperator.controller.ShinyProxyController$run$1.invokeSuspend(ShinyProxyController.kt)
	at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
	at kotlinx.coroutines.DispatchedTaskKt.resume(DispatchedTask.kt:178)
	at kotlinx.coroutines.DispatchedTaskKt.dispatch(DispatchedTask.kt:166)
	at kotlinx.coroutines.CancellableContinuationImpl.dispatchResume(CancellableContinuationImpl.kt:397)
	at kotlinx.coroutines.CancellableContinuationImpl.completeResume(CancellableContinuationImpl.kt:513)
	at kotlinx.coroutines.channels.AbstractChannel$ReceiveElement.completeResumeReceive(AbstractChannel.kt:907)
	at kotlinx.coroutines.channels.ArrayChannel.offerInternal(ArrayChannel.kt:83)
	at kotlinx.coroutines.channels.AbstractSendChannel.send(AbstractChannel.kt:134)
	at eu.openanalytics.shinyproxyoperator.controller.ResourceListener.enqueueResource(ResourceListener.kt:79)
	at eu.openanalytics.shinyproxyoperator.controller.ResourceListener.access$enqueueResource(ResourceListener.kt:36)
	at eu.openanalytics.shinyproxyoperator.controller.ResourceListener$1$onUpdate$2.invokeSuspend(ResourceListener.kt:51)
	at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
	at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:106)
	at kotlinx.coroutines.EventLoopImplBase.processNextEvent(EventLoop.common.kt:274)
	at kotlinx.coroutines.BlockingCoroutine.joinBlocking(Builders.kt:85)
	at kotlinx.coroutines.BuildersKt__BuildersKt.runBlocking(Builders.kt:59)
	at kotlinx.coroutines.BuildersKt.runBlocking(Unknown Source)
	at kotlinx.coroutines.BuildersKt__BuildersKt.runBlocking$default(Builders.kt:38)
	at kotlinx.coroutines.BuildersKt.runBlocking$default(Unknown Source)
	at eu.openanalytics.shinyproxyoperator.controller.ResourceListener$1.onUpdate(ResourceListener.kt:51)
	at eu.openanalytics.shinyproxyoperator.controller.ResourceListener$1.onUpdate(ResourceListener.kt:43)
	at io.fabric8.kubernetes.client.informers.cache.ProcessorListener$UpdateNotification.handle(ProcessorListener.java:107)
	at io.fabric8.kubernetes.client.informers.cache.ProcessorListener.run(ProcessorListener.java:57)
	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
	at java.base/java.util.concurrent.FutureTask.run(Unknown Source)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
	at java.base/java.lang.Thread.run(Unknown Source)
Caused by: com.fasterxml.jackson.databind.exc.MismatchedInputException: Cannot deserialize instance of io.fabric8.kubernetes.api.model.VolumeMount out of START_ARRAY token
 at [Source: UNKNOWN; line: -1, column: -1] (through reference chain: io.fabric8.kubernetes.api.model.PodTemplateSpec["spec"]->io.fabric8.kubernetes.api.model.PodSpec["containers"]->java.util.ArrayList[0]->io.fabric8.kubernetes.api.model.Container["volumeMounts"]->java.util.ArrayList[1])
```
Hi
You get this error because the patch you provided creates an invalid Kubernetes object.
I guess that this is the patch you are using:
```yaml
kubernetesPodTemplateSpecPatches: |
  - op: add
    path: /spec/volumes/-
    value:
      - name: shinyproxy-templates
        persistentVolumeClaim:
          claimName: shinyproxy-templates
  - op: add
    path: /spec/containers/0/volumeMounts/-
    value:
      - mountPath: "/opt/shinyproxy/templates"
        name: shinyproxy-templates
```
So what you are doing here is:
- adding a new array to the end of the `/spec/volumes` array
- adding a new array to the end of the `/spec/containers/0/volumeMounts` array

Therefore, when Kubernetes tries to read the YAML, you get the error that it cannot interpret an array as a VolumeMount object. The solution is to not add an array to the end of the `/spec/containers/0/volumeMounts` array, but to add an object. For example, using this patch:
```yaml
kubernetesPodTemplateSpecPatches: |
  - op: add
    path: /spec/volumes/-
    value:
      name: shinyproxy-templates
      persistentVolumeClaim:
        claimName: shinyproxy-templates
  - op: add
    path: /spec/containers/0/volumeMounts/-
    value:
      mountPath: "/opt/shinyproxy/templates"
      name: shinyproxy-templates
```
The only difference is that the `-` characters after the `value` key are removed.
You can see that I did the same in the examples in my first reply: there are no `-` characters when adding a value to the end of an array.
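To make the difference concrete outside of Kubernetes, here is a minimal Python sketch (plain dicts and lists standing in for the pod template, not the operator's actual code) of what a JSON Patch `add` with a path ending in `/-` does in both cases:

```python
# Plain-Python illustration of JSON Patch "add" with a path ending in "/-":
# the given value is appended as ONE element of the target array.

def add_to_end(array, value):
    """Emulate: {op: add, path: .../-, value: <value>}."""
    array.append(value)

volume_mounts = [{"name": "config", "mountPath": "/etc/shinyproxy"}]

# Wrong patch: the value itself is a list ("- mountPath: ..." in YAML),
# so the array ends up containing a nested list. Deserializing that as a
# VolumeMount fails with "Cannot deserialize ... out of START_ARRAY token".
wrong = list(volume_mounts)
add_to_end(wrong, [{"mountPath": "/opt/shinyproxy/templates", "name": "shinyproxy-templates"}])
print(type(wrong[1]).__name__)  # list

# Correct patch: the value is a single mapping (no leading "-"), so the
# array contains another object, as Kubernetes expects.
right = list(volume_mounts)
add_to_end(right, {"mountPath": "/opt/shinyproxy/templates", "name": "shinyproxy-templates"})
print(type(right[1]).__name__)  # dict
```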