d2iq-archive/spark-build

Spark job hangs forever at STATE_RUNNING - no logs

Closed this issue · 1 comment

Our submitted Spark jobs very often hang at STATE_RUNNING forever; only about one in two or three submissions finishes successfully. The logs give no hint of what is happening: the driver log always stops at `No credentials provided. Attempting to register without authentication`.

Submission command:
`dcos spark run --submit-args='--class portal.spark.cassandra.app.ProductModelPerNrOfAlerts http://http-server.marathon.l4lb.thisdcos.directory:8080/analytics-spark-cassandra-0.0.1-SNAPSHOT-jar-with-dependencies.jar'`
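Since the driver log (below) stops right before framework registration, it can also help to check from the master's side whether the driver framework ever showed up. A rough sketch, assuming you have saved the master state locally first (e.g. `curl http://leader.mesos:5050/master/state > state.json`; the URL and the jq-free parsing are assumptions for illustration):

```shell
# Sketch: list framework names from a saved copy of the Mesos master state.
# "state.json" is assumed to be a local dump of http://<master>:5050/master/state.
list_framework_names() {
  # Crude extraction without jq: pull every "name" field out of the JSON.
  # Note this matches every "name" key (tasks too), so treat it as a quick check;
  # with jq installed you would use: jq -r '.frameworks[].name' state.json
  grep -o '"name":"[^"]*"' "$1" | cut -d'"' -f4
}
```

If the driver framework (named after the main class here) never appears in the output, the driver is stuck before registration rather than running silently.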

Job driver log:

`I0626 12:47:03.403899 51546 fetcher.cpp:498] Fetcher Info: {"cache_directory":"\/tmp\/mesos\/fetch\/slaves\/032243ea-4dad-479d-a83a-442e8900d95f-S10","items":[{"action":"BYPASS_CACHE","uri":{"cache":false,"extract":true,"value":"http:\/\/http-server.marathon.l4lb.thisdcos.directory:8080\/analytics-spark-cassandra-0.0.1-SNAPSHOT-jar-with-dependencies.jar"}}],"sandbox_directory":"\/var\/lib\/mesos\/slave\/slaves\/032243ea-4dad-479d-a83a-442e8900d95f-S10\/frameworks\/f2c4b817-ad47-4ab0-8924-7c8c2cec8f5f-0031\/executors\/driver-20170626124703-0020\/runs\/b809f8fe-c67c-4e70-8349-cc32e391cf2f"}
I0626 12:47:03.405829 51546 fetcher.cpp:409] Fetching URI 'http://http-server.marathon.l4lb.thisdcos.directory:8080/analytics-spark-cassandra-0.0.1-SNAPSHOT-jar-with-dependencies.jar'
I0626 12:47:03.405849 51546 fetcher.cpp:250] Fetching directly into the sandbox directory
I0626 12:47:03.405869 51546 fetcher.cpp:187] Fetching URI 'http://http-server.marathon.l4lb.thisdcos.directory:8080/analytics-spark-cassandra-0.0.1-SNAPSHOT-jar-with-dependencies.jar'
I0626 12:47:03.405879 51546 fetcher.cpp:134] Downloading resource from 'http://http-server.marathon.l4lb.thisdcos.directory:8080/analytics-spark-cassandra-0.0.1-SNAPSHOT-jar-with-dependencies.jar' to '/var/lib/mesos/slave/slaves/032243ea-4dad-479d-a83a-442e8900d95f-S10/frameworks/f2c4b817-ad47-4ab0-8924-7c8c2cec8f5f-0031/executors/driver-20170626124703-0020/runs/b809f8fe-c67c-4e70-8349-cc32e391cf2f/analytics-spark-cassandra-0.0.1-SNAPSHOT-jar-with-dependencies.jar'
W0626 12:47:04.011155 51546 fetcher.cpp:289] Copying instead of extracting resource from URI with 'extract' flag, because it does not seem to be an archive: http://http-server.marathon.l4lb.thisdcos.directory:8080/analytics-spark-cassandra-0.0.1-SNAPSHOT-jar-with-dependencies.jar
I0626 12:47:04.011188 51546 fetcher.cpp:547] Fetched 'http://http-server.marathon.l4lb.thisdcos.directory:8080/analytics-spark-cassandra-0.0.1-SNAPSHOT-jar-with-dependencies.jar' to '/var/lib/mesos/slave/slaves/032243ea-4dad-479d-a83a-442e8900d95f-S10/frameworks/f2c4b817-ad47-4ab0-8924-7c8c2cec8f5f-0031/executors/driver-20170626124703-0020/runs/b809f8fe-c67c-4e70-8349-cc32e391cf2f/analytics-spark-cassandra-0.0.1-SNAPSHOT-jar-with-dependencies.jar'
I0626 12:47:04.287168 51563 exec.cpp:161] Version: 1.0.3
I0626 12:47:04.290699 51578 exec.cpp:236] Executor registered on agent 032243ea-4dad-479d-a83a-442e8900d95f-S10
I0626 12:47:04.291497 51576 docker.cpp:815] Running docker -H unix:///var/run/docker.sock run --cpu-shares 1024 --memory 1073741824 -e SPARK_SCALA_VERSION=2.10 -e SPARK_SUBMIT_OPTS= -Dspark.mesos.driver.frameworkId=f2c4b817-ad47-4ab0-8924-7c8c2cec8f5f-0031-driver-20170626124703-0020  -e LIBPROCESS_IP=10.0.0.14 -e MESOS_SANDBOX=/mnt/mesos/sandbox -e MESOS_CONTAINER_NAME=mesos-032243ea-4dad-479d-a83a-442e8900d95f-S10.b809f8fe-c67c-4e70-8349-cc32e391cf2f -v /var/lib/mesos/slave/slaves/032243ea-4dad-479d-a83a-442e8900d95f-S10/frameworks/f2c4b817-ad47-4ab0-8924-7c8c2cec8f5f-0031/executors/driver-20170626124703-0020/runs/b809f8fe-c67c-4e70-8349-cc32e391cf2f:/mnt/mesos/sandbox --net host --entrypoint /bin/sh --name mesos-032243ea-4dad-479d-a83a-442e8900d95f-S10.b809f8fe-c67c-4e70-8349-cc32e391cf2f mesosphere/spark:1.1.0-2.1.1-hadoop-2.6 -c ./bin/spark-submit --name portal.spark.cassandra.app.ProductModelPerNrOfAlerts --master mesos://zk://master.mesos:2181/mesos --driver-cores 1.0 --driver-memory 1024M --class portal.spark.cassandra.app.ProductModelPerNrOfAlerts --conf "spark.app.name=portal.spark.cassandra.app.ProductModelPerNrOfAlerts" --conf "spark.driver.supervise=false" --conf "spark.mesos.executor.docker.image=mesosphere/spark:1.1.0-2.1.1-hadoop-2.6" $MESOS_SANDBOX/analytics-spark-cassandra-0.0.1-SNAPSHOT-jar-with-dependencies.jar 
WARNING: Your kernel does not support swap limit capabilities, memory limited without swap.
17/06/26 12:47:05 INFO SparkContext: Running Spark version 2.1.1
17/06/26 12:47:05 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/06/26 12:47:05 INFO SecurityManager: Changing view acls to: root
17/06/26 12:47:05 INFO SecurityManager: Changing modify acls to: root
17/06/26 12:47:05 INFO SecurityManager: Changing view acls groups to: 
17/06/26 12:47:05 INFO SecurityManager: Changing modify acls groups to: 
17/06/26 12:47:05 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(root); groups with view permissions: Set(); users  with modify permissions: Set(root); groups with modify permissions: Set()
17/06/26 12:47:06 INFO Utils: Successfully started service 'sparkDriver' on port 36252.
17/06/26 12:47:06 INFO SparkEnv: Registering MapOutputTracker
17/06/26 12:47:06 INFO SparkEnv: Registering BlockManagerMaster
17/06/26 12:47:06 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
17/06/26 12:47:06 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
17/06/26 12:47:06 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-b6485b63-219f-470b-8fbf-b88e71ba8938
17/06/26 12:47:06 INFO MemoryStore: MemoryStore started with capacity 366.3 MB
17/06/26 12:47:06 INFO SparkEnv: Registering OutputCommitCoordinator
17/06/26 12:47:06 INFO log: Logging initialized @1627ms
17/06/26 12:47:06 INFO Server: jetty-9.2.z-SNAPSHOT
17/06/26 12:47:06 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@71c5b236{/jobs,null,AVAILABLE,@Spark}
17/06/26 12:47:06 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@2cab9998{/jobs/json,null,AVAILABLE,@Spark}
17/06/26 12:47:06 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@2f7a7219{/jobs/job,null,AVAILABLE,@Spark}
17/06/26 12:47:06 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@669513d8{/jobs/job/json,null,AVAILABLE,@Spark}
17/06/26 12:47:06 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@3a1d593e{/stages,null,AVAILABLE,@Spark}
17/06/26 12:47:06 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@4a8a60bc{/stages/json,null,AVAILABLE,@Spark}
17/06/26 12:47:06 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@361c294e{/stages/stage,null,AVAILABLE,@Spark}
17/06/26 12:47:06 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@7859e786{/stages/stage/json,null,AVAILABLE,@Spark}
17/06/26 12:47:06 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@285d851a{/stages/pool,null,AVAILABLE,@Spark}
17/06/26 12:47:06 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@314b8f2d{/stages/pool/json,null,AVAILABLE,@Spark}
17/06/26 12:47:06 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@664a9613{/storage,null,AVAILABLE,@Spark}
17/06/26 12:47:06 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@5118388b{/storage/json,null,AVAILABLE,@Spark}
17/06/26 12:47:06 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@15a902e7{/storage/rdd,null,AVAILABLE,@Spark}
17/06/26 12:47:06 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@7876d598{/storage/rdd/json,null,AVAILABLE,@Spark}
17/06/26 12:47:06 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@4a3e3e8b{/environment,null,AVAILABLE,@Spark}
17/06/26 12:47:06 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@5af28b27{/environment/json,null,AVAILABLE,@Spark}
17/06/26 12:47:06 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@71104a4{/executors,null,AVAILABLE,@Spark}
17/06/26 12:47:06 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@4985cbcb{/executors/json,null,AVAILABLE,@Spark}
17/06/26 12:47:06 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@72f46e16{/executors/threadDump,null,AVAILABLE,@Spark}
17/06/26 12:47:06 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@3c9168dc{/executors/threadDump/json,null,AVAILABLE,@Spark}
17/06/26 12:47:06 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@332a7fce{/static,null,AVAILABLE,@Spark}
17/06/26 12:47:06 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@549621f3{/,null,AVAILABLE,@Spark}
17/06/26 12:47:06 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@54361a9{/api,null,AVAILABLE,@Spark}
17/06/26 12:47:06 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@32232e55{/jobs/job/kill,null,AVAILABLE,@Spark}
17/06/26 12:47:06 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@5217f3d0{/stages/stage/kill,null,AVAILABLE,@Spark}
17/06/26 12:47:06 INFO ServerConnector: Started Spark@74cd6c54{HTTP/1.1}{0.0.0.0:4040}
17/06/26 12:47:06 INFO Server: Started @1774ms
17/06/26 12:47:06 INFO Utils: Successfully started service 'SparkUI' on port 4040.
17/06/26 12:47:06 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://10.0.0.14:4040
17/06/26 12:47:06 INFO SparkContext: Added JAR file:/mnt/mesos/sandbox/analytics-spark-cassandra-0.0.1-SNAPSHOT-jar-with-dependencies.jar at spark://10.0.0.14:36252/jars/analytics-spark-cassandra-0.0.1-SNAPSHOT-jar-with-dependencies.jar with timestamp 1498481226455
I0626 12:47:06.642014    79 sched.cpp:226] Version: 1.0.1
I0626 12:47:06.646399    73 sched.cpp:330] New master detected at master@172.16.0.7:5050
I0626 12:47:06.647047    73 sched.cpp:341] No credentials provided. Attempting to register without authentication`

Dispatcher logs:
`I0626 11:49:24.684080 29954 exec.cpp:161] Version: 1.0.3
I0626 11:49:24.689530 29960 exec.cpp:236] Executor registered on agent 032243ea-4dad-479d-a83a-442e8900d95f-S17
I0626 11:49:24.691406 29961 docker.cpp:815] Running docker -H unix:///var/run/docker.sock run --cpu-shares 1024 --memory 1073741824 -e SPARK_HDFS_CONFIG_URL= -e MARATHON_APP_LABEL_DCOS_PACKAGE_SOURCE=https://universe.mesosphere.com/repo -e MARATHON_APP_VERSION=2017-06-26T11:48:29.450Z -e SPARK_DISPATCHER_MESOS_PRINCIPAL= -e HOST=10.0.0.8 -e MARATHON_APP_RESOURCE_CPUS=1.0 -e MARATHON_APP_LABEL_SPARK_URI=https://downloads.mesosphere.com/spark/assets/spark-2.1.1-bin-2.6.tgz -e MARATHON_APP_LABEL_DCOS_PACKAGE_REGISTRY_VERSION=3.0 -e MARATHON_APP_LABEL_DCOS_SERVICE_SCHEME=http -e DCOS_SERVICE_NAME=spark -e PORT_10106=21959 -e MARATHON_APP_RESOURCE_GPUS=0 -e MARATHON_APP_LABEL_DCOS_PACKAGE_RELEASE=26 -e MARATHON_APP_LABEL_DCOS_SERVICE_NAME=spark -e SPARK_DISPATCHER_MESOS_SECRET= -e MARATHON_APP_DOCKER_IMAGE=mesosphere/spark:1.1.0-2.1.1-hadoop-2.6 -e MARATHON_APP_LABEL_DCOS_PACKAGE_NAME=spark -e MARATHON_APP_LABEL_DCOS_PACKAGE_VERSION=1.1.0-2.1.1 -e MESOS_TASK_ID=spark.5f4eb636-5a65-11e7-bb68-70b3d5800003 -e PORT=21958 -e PORT_10105=21958 -e MARATHON_APP_LABEL_DCOS_SERVICE_PORT_INDEX=2 -e MARATHON_APP_RESOURCE_MEM=1024.0 -e PORT_10109=21960 -e SPARK_DISPATCHER_MESOS_ROLE=* -e PORTS=21958,21959,21960 -e PORT1=21959 -e MARATHON_APP_LABEL_DCOS_PACKAGE_IS_FRAMEWORK=false -e SPARK_USER=root -e MARATHON_APP_RESOURCE_DISK=0.0 -e MARATHON_APP_LABEL_DCOS_PACKAGE_COMMAND=eyJwaXAiOlsiaHR0cHM6Ly9kb3dubG9hZHMubWVzb3NwaGVyZS5jb20vc3BhcmsvYXNzZXRzLzEuMS4wLTIuMS4xL2Rjb3Nfc3BhcmstMC41LjE5LXB5Mi5weTMtbm9uZS1hbnkud2hsIl19 -e MARATHON_APP_LABELS=DCOS_PACKAGE_RELEASE DCOS_SERVICE_SCHEME DCOS_PACKAGE_SOURCE DCOS_PACKAGE_COMMAND DCOS_PACKAGE_REGISTRY_VERSION DCOS_SERVICE_NAME DCOS_PACKAGE_FRAMEWORK_NAME DCOS_SERVICE_PORT_INDEX DCOS_PACKAGE_VERSION SPARK_URI DCOS_PACKAGE_NAME DCOS_PACKAGE_IS_FRAMEWORK -e MARATHON_APP_LABEL_DCOS_PACKAGE_FRAMEWORK_NAME=spark -e SPARK_LOG_LEVEL=INFO -e MARATHON_APP_ID=/spark -e PORT0=21958 -e PORT2=21960 -e LIBPROCESS_IP=10.0.0.8 -e MESOS_SANDBOX=/mnt/mesos/sandbox 
-e MESOS_CONTAINER_NAME=mesos-032243ea-4dad-479d-a83a-442e8900d95f-S17.7c40383f-e835-443e-8d55-74e81b0390b2 -v /var/lib/mesos/slave/slaves/032243ea-4dad-479d-a83a-442e8900d95f-S17/frameworks/032243ea-4dad-479d-a83a-442e8900d95f-0000/executors/spark.5f4eb636-5a65-11e7-bb68-70b3d5800003/runs/7c40383f-e835-443e-8d55-74e81b0390b2:/mnt/mesos/sandbox --net host --user=root --entrypoint /bin/sh --name mesos-032243ea-4dad-479d-a83a-442e8900d95f-S17.7c40383f-e835-443e-8d55-74e81b0390b2 mesosphere/spark:1.1.0-2.1.1-hadoop-2.6 -c /sbin/init.sh
WARNING: Your kernel does not support swap limit capabilities, memory limited without swap.
+ export DISPATCHER_PORT=21958
+ DISPATCHER_PORT=21958
+ export DISPATCHER_UI_PORT=21959
+ DISPATCHER_UI_PORT=21959
+ export SPARK_PROXY_PORT=21960
+ SPARK_PROXY_PORT=21960
+ SCHEME=http
+ OTHER_SCHEME=https
+ [[ '' == true ]]
+ export DISPATCHER_UI_WEB_PROXY_BASE=/service/spark
+ DISPATCHER_UI_WEB_PROXY_BASE=/service/spark
+ grep -v '#https#' /etc/nginx/conf.d/spark.conf.template
+ sed s,#http#,,
+ sed -i 's,<PORT>,21960,' /etc/nginx/conf.d/spark.conf
+ sed -i 's,<DISPATCHER_URL>,http://10.0.0.8:21958,' /etc/nginx/conf.d/spark.conf
+ sed -i 's,<DISPATCHER_UI_URL>,http://10.0.0.8:21959,' /etc/nginx/conf.d/spark.conf
+ sed -i 's,<PROTOCOL>,,' /etc/nginx/conf.d/spark.conf
+ [[ '' == true ]]
+ [[ -f hdfs-site.xml ]]
+ [[ -n '' ]]
+ exec runsvdir -P /etc/service
+ mkdir -p /mnt/mesos/sandbox/spark
+ exec
+ exec svlogd /mnt/mesos/sandbox/spark
+ mkdir -p /mnt/mesos/sandbox/nginx
+ exec svlogd /mnt/mesos/sandbox/nginx` 

For comparison, here is the log of a driver that finished successfully. After the `No credentials provided` line you can see the framework registration proceed (`Framework registered with ...`), which never happens in the hung runs.
`I0626 11:58:45.799917 47658 fetcher.cpp:498] Fetcher Info: {"cache_directory":"\/tmp\/mesos\/fetch\/slaves\/032243ea-4dad-479d-a83a-442e8900d95f-S10","items":[{"action":"BYPASS_CACHE","uri":{"cache":false,"extract":true,"value":"http:\/\/http-server.marathon.l4lb.thisdcos.directory:8080\/analytics-spark-cassandra-0.0.1-SNAPSHOT-jar-with-dependencies.jar"}}],"sandbox_directory":"\/var\/lib\/mesos\/slave\/slaves\/032243ea-4dad-479d-a83a-442e8900d95f-S10\/frameworks\/f2c4b817-ad47-4ab0-8924-7c8c2cec8f5f-0031\/executors\/driver-20170626115845-0012\/runs\/2b15713e-ab11-4dcc-b379-747501d60a1e"}
I0626 11:58:45.801811 47658 fetcher.cpp:409] Fetching URI 'http://http-server.marathon.l4lb.thisdcos.directory:8080/analytics-spark-cassandra-0.0.1-SNAPSHOT-jar-with-dependencies.jar'
I0626 11:58:45.801836 47658 fetcher.cpp:250] Fetching directly into the sandbox directory
I0626 11:58:45.801854 47658 fetcher.cpp:187] Fetching URI 'http://http-server.marathon.l4lb.thisdcos.directory:8080/analytics-spark-cassandra-0.0.1-SNAPSHOT-jar-with-dependencies.jar'
I0626 11:58:45.801865 47658 fetcher.cpp:134] Downloading resource from 'http://http-server.marathon.l4lb.thisdcos.directory:8080/analytics-spark-cassandra-0.0.1-SNAPSHOT-jar-with-dependencies.jar' to '/var/lib/mesos/slave/slaves/032243ea-4dad-479d-a83a-442e8900d95f-S10/frameworks/f2c4b817-ad47-4ab0-8924-7c8c2cec8f5f-0031/executors/driver-20170626115845-0012/runs/2b15713e-ab11-4dcc-b379-747501d60a1e/analytics-spark-cassandra-0.0.1-SNAPSHOT-jar-with-dependencies.jar'
W0626 11:58:46.422735 47658 fetcher.cpp:289] Copying instead of extracting resource from URI with 'extract' flag, because it does not seem to be an archive: http://http-server.marathon.l4lb.thisdcos.directory:8080/analytics-spark-cassandra-0.0.1-SNAPSHOT-jar-with-dependencies.jar
I0626 11:58:46.422771 47658 fetcher.cpp:547] Fetched 'http://http-server.marathon.l4lb.thisdcos.directory:8080/analytics-spark-cassandra-0.0.1-SNAPSHOT-jar-with-dependencies.jar' to '/var/lib/mesos/slave/slaves/032243ea-4dad-479d-a83a-442e8900d95f-S10/frameworks/f2c4b817-ad47-4ab0-8924-7c8c2cec8f5f-0031/executors/driver-20170626115845-0012/runs/2b15713e-ab11-4dcc-b379-747501d60a1e/analytics-spark-cassandra-0.0.1-SNAPSHOT-jar-with-dependencies.jar'
I0626 11:58:46.638218 47668 exec.cpp:161] Version: 1.0.3
I0626 11:58:46.642324 47678 exec.cpp:236] Executor registered on agent 032243ea-4dad-479d-a83a-442e8900d95f-S10
I0626 11:58:46.643647 47680 docker.cpp:815] Running docker -H unix:///var/run/docker.sock run --cpu-shares 1024 --memory 1073741824 -e SPARK_SCALA_VERSION=2.10 -e SPARK_SUBMIT_OPTS= -Dspark.mesos.driver.frameworkId=f2c4b817-ad47-4ab0-8924-7c8c2cec8f5f-0031-driver-20170626115845-0012 -e LIBPROCESS_IP=10.0.0.14 -e MESOS_SANDBOX=/mnt/mesos/sandbox -e MESOS_CONTAINER_NAME=mesos-032243ea-4dad-479d-a83a-442e8900d95f-S10.2b15713e-ab11-4dcc-b379-747501d60a1e -v /var/lib/mesos/slave/slaves/032243ea-4dad-479d-a83a-442e8900d95f-S10/frameworks/f2c4b817-ad47-4ab0-8924-7c8c2cec8f5f-0031/executors/driver-20170626115845-0012/runs/2b15713e-ab11-4dcc-b379-747501d60a1e:/mnt/mesos/sandbox --net host --entrypoint /bin/sh --name mesos-032243ea-4dad-479d-a83a-442e8900d95f-S10.2b15713e-ab11-4dcc-b379-747501d60a1e mesosphere/spark:1.1.0-2.1.1-hadoop-2.6 -c ./bin/spark-submit --name portal.spark.cassandra.app.ProductModelPerNrOfAlerts --master mesos://zk://master.mesos:2181/mesos --driver-cores 1.0 --driver-memory 1024M --class portal.spark.cassandra.app.ProductModelPerNrOfAlerts --conf "spark.app.name=portal.spark.cassandra.app.ProductModelPerNrOfAlerts" --conf "spark.driver.supervise=false" --conf "spark.mesos.executor.docker.image=mesosphere/spark:1.1.0-2.1.1-hadoop-2.6" $MESOS_SANDBOX/analytics-spark-cassandra-0.0.1-SNAPSHOT-jar-with-dependencies.jar
WARNING: Your kernel does not support swap limit capabilities, memory limited without swap.
17/06/26 11:58:47 INFO SparkContext: Running Spark version 2.1.1
17/06/26 11:58:47 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/06/26 11:58:48 INFO SecurityManager: Changing view acls to: root
17/06/26 11:58:48 INFO SecurityManager: Changing modify acls to: root
17/06/26 11:58:48 INFO SecurityManager: Changing view acls groups to: 
17/06/26 11:58:48 INFO SecurityManager: Changing modify acls groups to: 
17/06/26 11:58:48 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); groups with view permissions: Set(); users with modify permissions: Set(root); groups with modify permissions: Set()
17/06/26 11:58:48 INFO Utils: Successfully started service 'sparkDriver' on port 40575.
17/06/26 11:58:48 INFO SparkEnv: Registering MapOutputTracker
17/06/26 11:58:48 INFO SparkEnv: Registering BlockManagerMaster
17/06/26 11:58:48 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
17/06/26 11:58:48 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
17/06/26 11:58:48 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-e32ad26e-a3a3-4133-917d-a6767ab68b6f
17/06/26 11:58:48 INFO MemoryStore: MemoryStore started with capacity 366.3 MB
17/06/26 11:58:48 INFO SparkEnv: Registering OutputCommitCoordinator
17/06/26 11:58:48 INFO log: Logging initialized @1707ms
17/06/26 11:58:48 INFO Server: jetty-9.2.z-SNAPSHOT
17/06/26 11:58:48 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@71c5b236{/jobs,null,AVAILABLE,@Spark}
17/06/26 11:58:48 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@2cab9998{/jobs/json,null,AVAILABLE,@Spark}
17/06/26 11:58:48 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@2f7a7219{/jobs/job,null,AVAILABLE,@Spark}
17/06/26 11:58:48 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@669513d8{/jobs/job/json,null,AVAILABLE,@Spark}
17/06/26 11:58:48 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@3a1d593e{/stages,null,AVAILABLE,@Spark}
17/06/26 11:58:48 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@4a8a60bc{/stages/json,null,AVAILABLE,@Spark}
17/06/26 11:58:48 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@361c294e{/stages/stage,null,AVAILABLE,@Spark}
17/06/26 11:58:48 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@7859e786{/stages/stage/json,null,AVAILABLE,@Spark}
17/06/26 11:58:48 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@285d851a{/stages/pool,null,AVAILABLE,@Spark}
17/06/26 11:58:48 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@314b8f2d{/stages/pool/json,null,AVAILABLE,@Spark}
17/06/26 11:58:48 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@664a9613{/storage,null,AVAILABLE,@Spark}
17/06/26 11:58:48 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@5118388b{/storage/json,null,AVAILABLE,@Spark}
17/06/26 11:58:48 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@15a902e7{/storage/rdd,null,AVAILABLE,@Spark}
17/06/26 11:58:48 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@7876d598{/storage/rdd/json,null,AVAILABLE,@Spark}
17/06/26 11:58:48 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@4a3e3e8b{/environment,null,AVAILABLE,@Spark}
17/06/26 11:58:48 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@5af28b27{/environment/json,null,AVAILABLE,@Spark}
17/06/26 11:58:48 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@71104a4{/executors,null,AVAILABLE,@Spark}
17/06/26 11:58:48 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@4985cbcb{/executors/json,null,AVAILABLE,@Spark}
17/06/26 11:58:48 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@72f46e16{/executors/threadDump,null,AVAILABLE,@Spark}
17/06/26 11:58:48 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@3c9168dc{/executors/threadDump/json,null,AVAILABLE,@Spark}
17/06/26 11:58:48 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@332a7fce{/static,null,AVAILABLE,@Spark}
17/06/26 11:58:48 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@549621f3{/,null,AVAILABLE,@Spark}
17/06/26 11:58:48 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@54361a9{/api,null,AVAILABLE,@Spark}
17/06/26 11:58:48 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@32232e55{/jobs/job/kill,null,AVAILABLE,@Spark}
17/06/26 11:58:48 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@5217f3d0{/stages/stage/kill,null,AVAILABLE,@Spark}
17/06/26 11:58:48 INFO ServerConnector: Started Spark@5e2c92e0{HTTP/1.1}{0.0.0.0:4040}
17/06/26 11:58:48 INFO Server: Started @1820ms
17/06/26 11:58:48 INFO Utils: Successfully started service 'SparkUI' on port 4040.
17/06/26 11:58:48 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://10.0.0.14:4040
17/06/26 11:58:48 INFO SparkContext: Added JAR file:/mnt/mesos/sandbox/analytics-spark-cassandra-0.0.1-SNAPSHOT-jar-with-dependencies.jar at spark://10.0.0.14:40575/jars/analytics-spark-cassandra-0.0.1-SNAPSHOT-jar-with-dependencies.jar with timestamp 1498478328794
I0626 11:58:49.193374 78 sched.cpp:226] Version: 1.0.1
I0626 11:58:49.196720 69 sched.cpp:330] New master detected at master@172.16.0.5:5050
I0626 11:58:49.197013 69 sched.cpp:341] No credentials provided. Attempting to register without authentication
I0626 11:58:49.341975 73 sched.cpp:743] Framework registered with f2c4b817-ad47-4ab0-8924-7c8c2cec8f5f-0031-driver-20170626115845-0012
17/06/26 11:58:49 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 38277.
17/06/26 11:58:49 INFO NettyBlockTransferService: Server created on 10.0.0.14:38277
17/06/26 11:58:49 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
17/06/26 11:58:49 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 10.0.0.14, 38277, None)
17/06/26 11:58:49 INFO BlockManagerMasterEndpoint: Registering block manager 10.0.0.14:38277 with 366.3 MB RAM, BlockManagerId(driver, 10.0.0.14, 38277, None)
17/06/26 11:58:49 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 10.0.0.14, 38277, None)
17/06/26 11:58:49 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 10.0.0.14, 38277, None)
......`
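Comparing the two runs, the hung driver never gets past `sched.cpp:341`, while the successful one logs `Framework registered with ...` moments later. That one line is enough to triage a batch of driver logs; a minimal sketch (the file argument is a placeholder for a driver's sandbox stderr downloaded from the Mesos UI):

```shell
# Sketch: classify a driver sandbox log as registered vs. hung at registration.
classify_driver_log() {
  if grep -q 'Framework registered with' "$1"; then
    echo registered
  elif grep -q 'No credentials provided' "$1"; then
    echo hung-at-registration
  else
    echo unknown
  fi
}
```

For example, `classify_driver_log stderr` against the first log above would print `hung-at-registration`, and `registered` against this one.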

Hello! Why was this issue closed? What is the solution or the diagnosis? Thank you!