Error when running with a modified docker image
Closed this issue · 4 comments
It might be a naive question, but I have modified the Livy docker image to create a new one using the following Dockerfile:

```Dockerfile
FROM sasnouskikh/livy:0.8.0-incubating-spark_3.0.1_2.12-hadoop_3.2.0_cloud
RUN python3 -m pip install avro
```

and I replaced the image repository in the `values.yaml` of the livy chart, as well as the value for `LIVY_SPARK_KUBERNETES_CONTAINER_IMAGE`.
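Roughly, the `values.yaml` change looks like this (a sketch only; the key names are assumed from the chart's typical layout, so check the chart's own `values.yaml` for the exact structure, and `myrepo/livy-custom` is just a placeholder for my modified image):

```yaml
# Sketch only: key names assumed from the chart's typical layout,
# not copied from a specific chart version.
image:
  repository: myrepo/livy-custom   # hypothetical name for the modified image
  tag: 0.8.0-incubating-spark_3.0.1_2.12-hadoop_3.2.0_cloud

env:
  LIVY_SPARK_KUBERNETES_CONTAINER_IMAGE:
    value: "myrepo/livy-custom:0.8.0-incubating-spark_3.0.1_2.12-hadoop_3.2.0_cloud"
```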
But now if I run the example Spark Pi job, I get the following error in the job logs:

```
/opt/entrypoint.sh: line 45: /opt/spark/conf/spark-defaults.conf: Read-only file system
```
Any idea why? Everything ran just fine with the original image.
Hi, this looks similar to something I've seen before. I'll try it out on my end and let you know about the progress.
Thanks. I did get around the problem by simply providing the Python dependencies via the `pyFiles` argument of the Livy batch-create request, but it would be useful to know why it happens in this particular case, just for the sake of knowledge.
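For reference, the workaround is along these lines (a sketch only; the host and file paths are placeholders, and `pyFiles` is the relevant field of Livy's `POST /batches` API):

```bash
# Sketch of the workaround: ship the Python deps through Livy's batch API
# instead of baking them into the image. Host and paths are placeholders.
curl -X POST http://livy:8998/batches \
  -H 'Content-Type: application/json' \
  -d '{
        "file": "local:///opt/app/pi.py",
        "pyFiles": ["local:///opt/app/deps.zip"]
      }'
```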
Yo, I was just struggling with this issue.
Basically, you modified the wrong image (the `livy` one), whose `entrypoint.sh` just runs the Livy server. You need to modify the image that is used by the driver/executor pods.
It breaks here, because the bash script tries to write to `/opt/spark/conf`, which is mounted read-only by k8s and is in essence a dir holding `spark.properties`.
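Illustratively (this is not the real `entrypoint.sh`, just a minimal sketch of the failure mode):

```bash
# Minimal repro sketch: appending to a file under a read-only
# ConfigMap mount fails with the same error as in the job logs above.
echo "spark.example.conf value" >> /opt/spark/conf/spark-defaults.conf
# bash: /opt/spark/conf/spark-defaults.conf: Read-only file system
```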
BR,
M
Indeed, thx @maciekdude. @MBtech, please use:
- `sasnouskikh/livy:<version>` - to run Livy server containers
- `sasnouskikh/livy-spark:<version>` - to run Spark driver and executor containers with Livy support
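So for your use case, the custom layer should go on top of the Spark-side image instead, something like this (a sketch; I'm assuming the `livy-spark` tag mirrors the `livy` tag you used, so check Docker Hub for the exact tag):

```Dockerfile
# Sketch of the fix: rebase the customization onto the driver/executor image.
# The tag is assumed to mirror the livy image's tag; verify it on Docker Hub.
FROM sasnouskikh/livy-spark:0.8.0-incubating-spark_3.0.1_2.12-hadoop_3.2.0_cloud
RUN python3 -m pip install avro
```

Then point `LIVY_SPARK_KUBERNETES_CONTAINER_IMAGE` at the rebuilt image; the chart's Livy server image can stay stock.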