EnMasseProject/enmasse-workshop

Tuning cores and memory needed by Spark executors

In order to share the Spark cluster among multiple spark-driver applications, we need to tune the spark-submit parameters that control per-executor resources (--executor-cores, --total-executor-cores, --executor-memory, ...). The example starts a one-node Spark cluster with 8 cores. The same should be considered for memory.

As it stands, the first spark-driver acquires all 8 available cores, so a second one cannot get any executors and never runs.
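
As a sketch of a possible fix (the flag values, master host, class and jar names below are illustrative, not taken from the workshop setup): capping each application with --total-executor-cores and sizing executors and memory explicitly would let two drivers share the 8-core node, for example:

    # Cap this application at half of the node's 8 cores and size
    # executors explicitly so a second driver can still acquire resources.
    # Host, class and jar names are placeholders.
    spark-submit \
      --master spark://spark-master:7077 \
      --total-executor-cores 4 \
      --executor-cores 2 \
      --executor-memory 2g \
      --class io.example.WorkshopApp \
      workshop-app.jar

With values like these, each driver holds at most 4 cores (2 executors of 2 cores each); memory should be capped analogously so the sum across applications fits the node.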