apache-spark-on-k8s/spark

Permissions error when running spark-submit

jicowan opened this issue · 3 comments

When running the following command:

bin/spark-submit \
  --deploy-mode cluster \
  --class org.apache.spark.examples.SparkPi \
  --master k8s://https://<cluster_url> \
  --kubernetes-namespace default \
  --conf spark.executor.instances=5 \
  --conf spark.app.name=spark-pi \
  --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
  --conf spark.kubernetes.driver.docker.image=kubespark/spark-driver:v2.2.0-kubernetes-0.5.0 \
  --conf spark.kubernetes.executor.docker.image=kubespark/spark-executor:v2.2.0-kubernetes-0.5.0 \
  local:///~/Downloads/spark/examples/jars/spark-examples_2.11-2.2.0-k8s-0.5.0.jar

I get the following errors:

2018-02-06 09:45:07 WARN  Config:305 - Error reading service account token from: [/var/run/secrets/kubernetes.io/serviceaccount/token]. Ignoring.
Exception in thread "main" io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: GET at: https://<server_url>/api/v1/namespaces/default/pods. Message: User "system:anonymous" cannot list pods in the namespace "default"..

I have confirmed that my account can list, edit, create, and delete pods in the default namespace. I also created a service account named spark. I am not able to run this under kubectl proxy.
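For anyone hitting the same error: a minimal RBAC sketch that grants a `spark` service account pod permissions in the `default` namespace might look like the following (the role and binding names are assumptions, not from this issue):

```yaml
# Hypothetical Role/RoleBinding for the spark service account.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: spark-role
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch", "create", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: spark-role-binding
  namespace: default
subjects:
- kind: ServiceAccount
  name: spark
  namespace: default
roleRef:
  kind: Role
  name: spark-role
  apiGroup: rbac.authorization.k8s.io
```

You can then check what the service account is allowed to do with `kubectl auth can-i list pods --as=system:serviceaccount:default:spark`. Note, however, that the error above is about the *submission* client authenticating as `system:anonymous`, which RBAC on the service account alone will not fix.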

I noticed that the user is system:anonymous. Is that expected?

I don't know. I'm logged in as a user - the Kubernetes admin - who has all the necessary permissions to create the objects. Does spark-submit run under a different security context?

I had to pass in the Kubernetes API token (OAuth).
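For reference, the spark-on-k8s fork lets the submission client authenticate with a bearer token via `spark.kubernetes.authenticate.submission.oauthToken`. A sketch of the extra flag (the token-extraction command assumes a pre-1.24-style service account secret and is not from this issue):

```shell
# Hypothetical: read the spark service account's token (older clusters
# auto-create a token secret; newer ones may require `kubectl create token`).
TOKEN=$(kubectl -n default get secret \
  $(kubectl -n default get sa spark -o jsonpath='{.secrets[0].name}') \
  -o jsonpath='{.data.token}' | base64 --decode)

# Pass the token to spark-submit so the submission client is no longer
# treated as system:anonymous.
bin/spark-submit \
  --conf spark.kubernetes.authenticate.submission.oauthToken=$TOKEN \
  ...
```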