adm-laboratory-spark

Tutorial on deploying Spark on top of Kubernetes


Create a Spark cluster using Kubernetes

Requirements

docker

Docker is an open platform for developing, shipping, and running applications. Docker enables you to separate your applications from your infrastructure.

Follow installation instructions here.
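You can check that Docker is installed and working by running the hello-world image:

docker run hello-world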

kind

kind is a tool for running local Kubernetes clusters using Docker container "nodes".

Follow installation instructions here.
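You can confirm the installation with:

kind version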

kubectl

The Kubernetes command-line tool, kubectl, allows you to run commands against Kubernetes clusters. You can use kubectl to deploy applications, inspect and manage cluster resources, and view logs.

Follow installation instructions here.
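A few kubectl commands that come in handy throughout this tutorial (the pod name is the one used later in this guide):

kubectl cluster-info          # show the cluster endpoint
kubectl get nodes             # list the kind "nodes"
kubectl get pods              # list the running pods
kubectl logs spark-master-0   # print the logs of a pod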

helm

Helm is the package manager for Kubernetes. Helm helps you manage Kubernetes applications — Helm Charts help you define, install, and upgrade even the most complex Kubernetes application.

Follow installation instructions here.
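For reference, typical Helm usage follows the pattern below; the repository and chart names are placeholders, and the actual installation used in this tutorial is scripted in the Deploy Spark section:

helm repo add <repo-name> <repo-url>     # register a chart repository
helm repo update                         # refresh the local chart index
helm install <release> <repo>/<chart>    # deploy a chart as a named release
helm uninstall <release>                 # remove the release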

Deploy Spark

Fortunately, we have a Helm chart that deploys all the Spark components.

Clone this repository with:

git clone https://github.com/matthewrossi/adm-laboratory-spark.git

Create a local Kubernetes cluster using Docker container "nodes":

kind create cluster --config=kind-config.yaml
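The cluster layout is defined in kind-config.yaml (you can inspect it with cat kind-config.yaml). As a rough idea, and not necessarily the exact file shipped with this repository, a multi-node configuration looks like:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker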

Install Spark via Helm:

./install_spark.sh
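The script wraps the Helm installation. Its exact contents are not reproduced here, but judging from the pod names below it plausibly boils down to something like installing the Bitnami Spark chart:

helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm install spark bitnami/spark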

Once the pods are running, you should see:

> kubectl get pods
NAME             READY   STATUS    RESTARTS   AGE
spark-master-0   1/1     Running   0          42m
spark-worker-0   1/1     Running   0          42m
spark-worker-1   1/1     Running   0          35m

Launch a test job

Get a terminal on the Spark master node:

./login_spark.sh
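The script presumably wraps a kubectl exec into the master pod; the manual equivalent would be something like:

kubectl exec -it spark-master-0 -- bash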

You now have access to the Spark 3.3.2 cluster. Launch a test job to compute an approximation of pi:

run-example SparkPi 10
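run-example is a wrapper script shipped with Spark that submits one of the bundled examples; it is roughly equivalent to a spark-submit of the SparkPi class (the jar path below is indicative and may differ inside the container image):

spark-submit --class org.apache.spark.examples.SparkPi \
  $SPARK_HOME/examples/jars/spark-examples_2.12-3.3.2.jar 10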

Access the Spark Dashboard

You can also expose the Spark dashboard from the cluster to your local machine.

./expose_spark.sh
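The script presumably forwards the master's web UI port to your machine; a manual equivalent would be something like:

kubectl port-forward pod/spark-master-0 8080:8080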

Then open http://localhost:8080 in a browser to check the status of your jobs.

Delete the local Kubernetes cluster

Don't forget to delete the local Kubernetes cluster with:

kind delete cluster

Otherwise, kind will keep it running even after reboots.
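You can verify that nothing was left behind:

kind get clusters   # should list no clusters
docker ps           # the kind node containers should be gone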