adm-laboratory-hadoop

Tutorial on deploying Hadoop on top of Kubernetes


Create a Hadoop cluster using Kubernetes

This tutorial is an adaptation of Andrea Zonca's original Install Hadoop on Kubernetes tutorial.

Requirements

docker

Docker is an open platform for developing, shipping, and running applications. Docker enables you to separate your applications from your infrastructure.

Follow installation instructions here.

kind

kind is a tool for running local Kubernetes clusters using Docker container "nodes".

Follow installation instructions here.

kubectl

The Kubernetes command-line tool, kubectl, allows you to run commands against Kubernetes clusters. You can use kubectl to deploy applications, inspect and manage cluster resources, and view logs.

Follow installation instructions here.

helm

Helm is the package manager for Kubernetes. Helm helps you manage Kubernetes applications — Helm Charts help you define, install, and upgrade even the most complex Kubernetes application.

Follow installation instructions here.

Deploy Hadoop

Fortunately, a Helm chart is available that deploys all the Hadoop components.

Clone this repository with:

git clone https://github.com/matthewrossi/adm-laboratory-hadoop.git

Create a local Kubernetes cluster using Docker container “nodes”:

kind create cluster --config=kind-config.yaml
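The actual kind-config.yaml is provided by the repository. For reference, a typical kind configuration for a small multi-node cluster like this one looks as follows (the node roles and counts here are an assumption, not the file's actual contents):

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
```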

Verify the configuration in stable_hadoop_values.yaml; it is currently kept simple, with no persistence.

Install Hadoop via Helm:

./install_hadoop.sh
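The actual install_hadoop.sh ships with the repository. As a rough sketch, a Helm-based install of this kind boils down to a single helm install pointed at the values file; the chart path below is a placeholder assumption, not the script's real contents:

```shell
# Hypothetical sketch of install_hadoop.sh; the chart reference is a
# placeholder assumption, only the values file name comes from the repo.
install_hadoop() {
  helm install hadoop ./charts/hadoop \
    --values stable_hadoop_values.yaml \
    --wait   # block until the deployed pods report ready
}
```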

Once the pods are running, you should see:

> kubectl get pods
NAME                      READY   STATUS    RESTARTS   AGE
hadoop-hadoop-hdfs-dn-0   1/1     Running   0          68m
hadoop-hadoop-hdfs-dn-1   1/1     Running   0          60m
hadoop-hadoop-hdfs-nn-0   1/1     Running   0          68m
hadoop-hadoop-yarn-nm-0   1/1     Running   0          68m
hadoop-hadoop-yarn-nm-1   1/1     Running   0          59m
hadoop-hadoop-yarn-rm-0   1/1     Running   0          68m
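To check this without eyeballing the table, a small hypothetical helper (not part of the repository) can count the STATUS column of the kubectl output; six Running pods means the cluster is up:

```shell
# Count pods whose STATUS column reads "Running" in `kubectl get pods`
# output (hypothetical helper, not part of the repository).
count_running() {
  # skip the header line, count rows whose third column is "Running"
  awk 'NR > 1 && $3 == "Running" { n++ } END { print n + 0 }'
}
```

Usage: `kubectl get pods | count_running` should print 6 once all pods above are up.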

Launch a test job

Get a terminal on the YARN node manager:

./login_yarn.sh
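login_yarn.sh is part of the repository; its likely core is a kubectl exec into one of the node manager pods from the listing above (which pod it picks is an assumption):

```shell
# Hypothetical equivalent of login_yarn.sh: open an interactive shell on
# the first YARN node manager pod (pod name taken from the listing above).
login_yarn() {
  kubectl exec -it hadoop-hadoop-yarn-nm-0 -- /bin/bash
}
```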

You now have access to the Hadoop 3.3.2 cluster. Launch a test MapReduce job to compute pi:

bin/yarn jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.2.jar pi 16 1000
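The pi example estimates π by Monte Carlo sampling: the two arguments ask for 16 map tasks, each throwing 1000 random points into the unit square and counting how many land inside the quarter circle. The same estimate can be sketched locally in plain awk:

```shell
# Monte Carlo estimate of pi, mirroring what the example job computes:
# throw n random points into the unit square and count hits inside the
# quarter circle; pi is approximately 4 * hits / n.
estimate_pi() {
  awk -v n="$1" 'BEGIN {
    srand(42)                         # fixed seed for reproducibility
    for (i = 0; i < n; i++) {
      x = rand(); y = rand()
      if (x * x + y * y <= 1) hits++
    }
    printf "%.4f\n", 4 * hits / n
  }'
}

estimate_pi 100000   # prints a value close to 3.14
```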

Access the YARN Dashboard

You can also export the YARN dashboard from the cluster to your local machine.

./expose_yarn.sh
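expose_yarn.sh also ships with the repository; a hand-rolled equivalent is a kubectl port-forward of the resource manager service. The service name below is an assumption based on the pod names above; check `kubectl get svc` for the actual one.

```shell
# Hypothetical equivalent of expose_yarn.sh: forward the YARN resource
# manager UI (port 8088) to the local machine. The service name is an
# assumption, not taken from the script.
expose_yarn() {
  kubectl port-forward service/hadoop-hadoop-yarn-rm 8088:8088
}
```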

Connect to port 8088 on your local machine to check the status of the jobs.

Delete the local Kubernetes cluster

Don't forget to delete the local Kubernetes cluster with:

kind delete cluster

Otherwise kind will keep it running even after reboots.