Scale your data management by distributing workload and storage across Hadoop and Spark clusters, and explore and transform your data in Jupyter Notebook.
The purpose of this tutorial is to show how to get started with Hadoop, Spark, and Jupyter for your big data solution, deployed as Docker containers.
- On Apple Silicon, you may need to use the arm64 branch to install.
- Ensure Docker is installed.
Execute `bash master-build.sh` to build the images and start the containers.
Execute `bash master-delete.sh` to stop the containers.
- Hadoop UI: http://localhost:9870
- Spark Master UI: http://localhost:8080
- Jupyter UI: http://localhost:8888
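
Once the containers are up, you can smoke-test the cluster from a Jupyter notebook. Below is a minimal PySpark sketch; the master URL `spark://spark-master:7077` and the HDFS URL `hdfs://namenode:9000` are assumptions that depend on the service names and ports in your Docker setup, so adjust them to match your configuration.

```python
# Minimal cluster smoke test, run from a Jupyter notebook.
# Hostnames "spark-master" and "namenode" are assumed service names; adjust as needed.
from pyspark.sql import SparkSession

# Connect to the standalone Spark master (assumed address).
spark = (
    SparkSession.builder
    .master("spark://spark-master:7077")
    .appName("cluster-smoke-test")
    .getOrCreate()
)

# Run a small distributed computation to confirm the workers respond.
df = spark.range(1_000_000)
print(df.selectExpr("sum(id)").first()[0])

# Optionally round-trip the data through HDFS (assumed namenode RPC address).
df.write.mode("overwrite").parquet("hdfs://namenode:9000/tmp/smoke_test")
print(spark.read.parquet("hdfs://namenode:9000/tmp/smoke_test").count())

spark.stop()
```

While the job runs, it should appear under Running Applications in the Spark Master UI at http://localhost:8080, which confirms the notebook is talking to the cluster rather than a local Spark instance.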