This is the companion repo to my LinkedIn Learning Courses on Apache Hadoop and Apache Spark.
🐘 1. Learning Hadoop - link
- uses mostly GCP Dataproc
- for running Hadoop & associated library (e.g. Hive, Pig, Spark...) workloads
🌩️ 2. Cloud Hadoop: Scaling Apache Spark - link
- uses GCP Dataproc, AWS EMR --or--
- Databricks on AWS
⛈️ 3. Azure Databricks Spark Essential Training - link
- uses Azure with Databricks
- for scaling Apache Spark workloads
You have a number of options. Although it is possible to set up a local Hadoop/Spark cluster, I do NOT recommend this approach, as it's needlessly complex for initial study. Rather, I recommend that you use a partially or fully-managed cluster. For learning, I most often use a fully-managed (free tier) cluster.
- 1. FULLY-MANAGED - Use Databricks Community Edition (managed, hosted Apache Spark)
- example screenshot shown above
- use Databricks AWS community edition (simplest setup - free tier on AWS) - link --OR--
- use Databricks Azure trial edition
- TIP: it's simpler to try this out on the AWS free tier; Azure may require a pay-as-you-go account to get the needed CPU/GPU resources
- uses Databricks (Jupyter-style) notebooks to connect to a small, managed Spark cluster
- creates and manages your data file buckets as part of the Databricks service (on either AWS S3 or Azure Blob Storage)
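A first notebook cell on Databricks is typically a small word count. The flatMap → reduceByKey flow behind it can be sketched in plain Python (no cluster required); the sample `lines` data below is made up for illustration:

```python
from collections import Counter

# Stand-in for a small text file or notebook sample dataset
lines = [
    "hadoop spark hadoop",
    "spark streaming spark",
]

# flatMap step: split every line into individual words
words = [w for line in lines for w in line.split()]

# map + reduceByKey step: count occurrences per word
counts = Counter(words)

print(counts.most_common(2))  # → [('spark', 3), ('hadoop', 2)]
```

In an actual Databricks notebook, roughly the same pipeline would be `sc.parallelize(lines).flatMap(str.split).countByValue()`, using the preconfigured `sc` handle the notebook provides.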
- 2. PARTIALLY-MANAGED - Set up a managed Hadoop/Spark cloud cluster on GCP or AWS
- see the setup-hadoop folder in this Repo for instructions/scripts
- create a GCS (or AWS S3) bucket for input/output job data
- see the example_datasets folder in this Repo for sample data files
- for GCP, use Dataproc, which includes a Jupyter notebook interface --OR--
- for AWS, use EMR with EMR Studio (which includes managed Jupyter instances) - link - example screenshot shown above
- for Azure, it is possible to use the HDInsight service, but I prefer Databricks on Azure because I find it more feature-complete and performant
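Whichever cloud you choose, job scripts are typically parameterized by input/output URIs (gs:// for GCS, s3:// for AWS S3), so the same code can also run locally on plain file paths for testing. A minimal plain-Python sketch of that pattern — the function names and the bucket URI in the docstring are hypothetical, and a real cluster job would use the Spark APIs instead:

```python
import sys
from collections import Counter
from pathlib import Path


def count_words(text: str) -> Counter:
    """Count whitespace-separated words in a block of text."""
    return Counter(text.split())


def run_word_count(input_path: str, output_path: str) -> None:
    """Read input lines, count words, write 'word<TAB>count' lines.

    On a cluster the paths would be bucket URIs such as
    gs://my-bucket/input.txt (hypothetical name); locally they are files.
    """
    counts = count_words(Path(input_path).read_text())
    out = "\n".join(f"{word}\t{n}" for word, n in counts.most_common())
    Path(output_path).write_text(out)


if __name__ == "__main__" and len(sys.argv) >= 3:
    run_word_count(sys.argv[1], sys.argv[2])
```

Keeping the paths as arguments means the identical script works against a local file, a GCS bucket mount, or (with Spark's own readers) a gs://, s3://, or hdfs:// URI.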
- 3. MANUAL - Set up Hadoop/Spark locally or on a 'raw' cloud VM, such as AWS EC2
- NOT RECOMMENDED for learning - too complex to set up
- Cloudera Learning VM - also NOT recommended; it changes too often and the documentation isn't aligned with it
EXAMPLES from org.apache.hadoop_or_spark.examples
- link for Spark examples
- Run a Hadoop WordCount Job with Java (jar file)
- Run a Hadoop and/or Spark CalculatePi (digits) Script with PySpark or other libraries
- Run using the Cloudera shared demo env
- at https://demo.gethue.com/
- login is user: demo, pwd: demo
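The CalculatePi examples estimate π by Monte Carlo sampling: draw random points in the unit square and count the fraction that land inside the quarter circle. A plain-Python sketch of the same computation (the PySpark version simply parallelizes the sampling; `estimate_pi` is an illustrative name):

```python
import random


def estimate_pi(num_samples: int, seed: int = 42) -> float:
    """Monte Carlo estimate of pi: the fraction of random points in
    the unit square that fall inside the quarter circle, times 4."""
    rng = random.Random(seed)  # fixed seed for a reproducible estimate
    inside = 0
    for _ in range(num_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / num_samples


print(estimate_pi(100_000))
```

Spark ships a ready-made version of this job (e.g. the SparkPi class in the Spark examples jar), which makes it a convenient first submit test on a freshly created cluster.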
There are ~10 courses on Hadoop/Spark topics on LinkedIn Learning. See the graphic below.