This tutorial is an adaptation of Andrea Zonca's original Install Hadoop on Kubernetes tutorial.
Docker is an open platform for developing, shipping, and running applications. Docker enables you to separate your applications from your infrastructure.
Follow installation instructions here.
kind is a tool for running local Kubernetes clusters using Docker container "nodes".
Follow installation instructions here.
The Kubernetes command-line tool, kubectl, allows you to run commands against Kubernetes clusters. You can use kubectl to deploy applications, inspect and manage cluster resources, and view logs.
Follow installation instructions here.
Helm is the package manager for Kubernetes. Helm helps you manage Kubernetes applications — Helm Charts help you define, install, and upgrade even the most complex Kubernetes application.
Follow installation instructions here.
Fortunately, a Helm chart is available that deploys all the Hadoop components.
Clone this repository with:
git clone https://github.com/matthewrossi/adm-laboratory-hadoop.git
Create a local Kubernetes cluster using Docker container "nodes":
kind create cluster --config=kind-config.yaml
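Before moving on, you can check that the cluster came up. The cluster name `kind` is the default; adjust if kind-config.yaml overrides it.

```shell
# List the kind clusters and the nodes Kubernetes registered.
kind get clusters   # the default cluster name is "kind"
kubectl get nodes   # one control-plane node plus any workers from kind-config.yaml
```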
Review the configuration in stable_hadoop_values.yaml. It is currently kept simple: no persistence is enabled.
Install Hadoop via Helm:
./install_hadoop.sh
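The script is a thin wrapper around Helm. A minimal sketch of what it likely runs, assuming the (now-deprecated) stable/hadoop chart and the release name `hadoop`; check install_hadoop.sh itself for the exact commands:

```shell
# Assumptions: the chart repository URL, chart name, and release name below
# are inferred from the values file and pod names, not taken from the script.
helm repo add stable https://charts.helm.sh/stable
helm install hadoop stable/hadoop -f stable_hadoop_values.yaml
```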
Once the pods are running, you should see:
> kubectl get pods
NAME                      READY   STATUS    RESTARTS   AGE
hadoop-hadoop-hdfs-dn-0   1/1     Running   0          68m
hadoop-hadoop-hdfs-dn-1   1/1     Running   0          60m
hadoop-hadoop-hdfs-nn-0   1/1     Running   0          68m
hadoop-hadoop-yarn-nm-0   1/1     Running   0          68m
hadoop-hadoop-yarn-nm-1   1/1     Running   0          59m
hadoop-hadoop-yarn-rm-0   1/1     Running   0          68m
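If the pods are still starting, you can block until everything is ready instead of polling `kubectl get pods`:

```shell
# Wait up to 10 minutes for all pods in the current namespace to become Ready.
kubectl wait --for=condition=Ready pod --all --timeout=600s
```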
Get a terminal on the YARN node manager:
./login_yarn.sh
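If you prefer to do this by hand, a sketch of what the script amounts to (the pod name is taken from the listing above; the actual script may select the pod differently):

```shell
# Open an interactive shell in the first YARN node-manager pod.
kubectl exec -it hadoop-hadoop-yarn-nm-0 -- bash
```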
You now have access to the Hadoop 3.3.2 cluster. Launch a test MapReduce job to compute pi:
bin/yarn jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.2.jar pi 16 1000
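From the same shell you can also watch the job and check cluster health with the standard Hadoop CLIs:

```shell
# List YARN applications and their state, then report HDFS datanode health.
bin/yarn application -list
bin/hdfs dfsadmin -report
```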
You can also expose the YARN dashboard from the cluster to your local machine:
./expose_yarn.sh
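Under the hood this is a port-forward; a sketch, assuming the ResourceManager service is named `hadoop-hadoop-yarn-rm` like its pod:

```shell
# Forward local port 8088 to the YARN ResourceManager web UI.
# The service name is an assumption; check `kubectl get svc` for the real one.
kubectl port-forward svc/hadoop-hadoop-yarn-rm 8088:8088
```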
Then connect to port 8088 locally to check the status of your jobs.
Don't forget to delete the local Kubernetes cluster with:
kind delete cluster
Otherwise kind will keep it running even after reboots.