[QUESTION] configmap not found
Wangdaoshuai opened this issue · 2 comments
Wangdaoshuai commented
- ✋ I have searched the open/closed issues and my issue is not listed.
I submitted a test job, but the driver pod reported that a ConfigMap was not found, and the whole job failed.
App version: v1beta2-1.4.5-3.5.0
kubectl get all -n spark-latest
NAME READY STATUS RESTARTS AGE
pod/spark-operator-5d7df588f-vctkn 1/1 Running 0 17h
pod/spark-operator-webhook-init--1-46xjf 0/1 Completed 0 17h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/spark-operator-webhook ClusterIP * <none> 443/TCP 17h
service/spark-pi-1-ui-svc ClusterIP * <none> 4040/TCP 15h
service/spark-pi-2-ui-svc ClusterIP * <none> 4040/TCP 23m
service/spark-pi-ui-svc ClusterIP * <none> 4040/TCP 55m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/spark-operator 1/1 1 1 17h
NAME DESIRED CURRENT READY AGE
replicaset.apps/spark-operator-5d7df588f 1 1 1 17h
NAME COMPLETIONS DURATION AGE
job.batch/spark-operator-webhook-init 1/1 2m18s 17h
kubectl describe pod spark-pi-2-driver -n spark-latest
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 28s default-scheduler Successfully assigned spark-latest/spark-pi-2-driver to *******, elapsedTime: 29.683152ms
Warning FailedMount 12s (x6 over 28s) kubelet MountVolume.SetUp failed for volume "spark-conf-volume-driver" : configmap "spark-drv-76a7dd8f561347f4-conf-map" not found
Warning FailedMount 12s (x6 over 28s) kubelet MountVolume.SetUp failed for volume "config-vol" : configmap "dummy-cm" not found
Test.yaml
apiVersion: "sparkoperator.k8s.io/v1beta2"
kind: SparkApplication
metadata:
name: spark-pi-2
namespace: spark-latest
spec:
type: Scala
mode: cluster
image: "spark:3.5.0"
imagePullPolicy: IfNotPresent
mainClass: org.apache.spark.examples.SparkPi
mainApplicationFile: "local:///opt/spark/examples/jars/spark-examples_2.12-3.5.0.jar"
sparkVersion: "3.5.0"
restartPolicy:
type: Never
volumes:
- name: config-vol
configMap:
name: dummy-cm
driver:
cores: 1
coreLimit: "1200m"
memory: "512m"
labels:
version: 3.5.0
serviceAccount: spark-operator-spark
volumeMounts:
- name: config-vol
mountPath: /opt/spark/mycm
executor:
cores: 1
instances: 1
memory: "512m"
labels:
version: 3.5.0
volumeMounts:
- name: config-vol
mountPath: /opt/spark/mycm
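The spec above mounts a ConfigMap named dummy-cm that, per the events, was never created. A minimal manifest to create it (a sketch; the key and contents are placeholders, any data will satisfy the mount):

apiVersion: v1
kind: ConfigMap
metadata:
  name: dummy-cm
  namespace: spark-latest
data:
  example.conf: |
    # placeholder; replace with the configuration the job actually reads
    dummy.key=dummy-value

Applying this with kubectl apply -f before submitting the SparkApplication should clear the second FailedMount. The auto-generated spark-drv-*-conf-map is created during submission and cannot be pre-created this way, so the first warning points at a different problem.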
imtzer commented
Similar issue to "add retry config": the driver in a ScheduledSparkApplication keeps getting stuck in the ContainerCreating state due to a missing ConfigMap, which was also described in "Unable to Mount ConfigMap in Driver Pod".
github-actions commented
This issue has been automatically marked as stale because it has been open 60 days with no activity. Remove stale label or comment or this will be closed in 30 days. Thank you for your contributions.