Unless you already have a working Apache Spark cluster, you will need Docker for a simple environment setup.
The provided `docker-compose.yml` and the Spark configurations in the `conf` directory are cloned from https://github.com/gettyimages/docker-spark.
- Make sure Docker is installed properly and `docker-compose` is ready to use
- Run `$ docker-compose up -d` under the `data-mr` directory
- Check the Spark UI at http://localhost:8080 and you should see 1 master and 1 worker
- Run `$ docker exec -it datamr_master_1 /bin/bash` to get into the container shell, and start utilizing Spark commands such as `# spark-shell`, `# pyspark` or `# spark-submit`. You may want to replace `datamr_master_1` with the actual container name spawned by the `docker-compose` process
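
Once you're in the master container shell, a quick smoke test is to open `pyspark` and run a trivial computation to confirm the worker actually executes tasks. This is only a sketch and assumes the `sc` SparkContext that the `pyspark` shell creates for you:

```python
# Run inside the `pyspark` shell in the master container.
# `sc` is the SparkContext the shell provides; this simply distributes
# a small range across the worker and sums it.
rdd = sc.parallelize(range(100))
print(rdd.sum())  # expected output: 4950
```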
If you're not already familiar with Apache Spark, you'll need to go through its documentation to learn the available APIs. The Spark version that ships with this Docker setup is determined by https://github.com/gettyimages/docker-spark.
For jobs that rely on external dependencies and libraries, make sure they are properly packaged on submission.
On submission, we will need:
- Source code of the solution
- Build instructions for job packaging (unless your solution is a single `.py`), such as Maven or SBT for Scala/Java, or `setup.py` for a Python `.zip`/`.egg`
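
If your Python solution spans multiple modules, one possible (not prescribed) approach is a minimal `setup.py` that builds an egg of your package; the package name `mrjobs` below is a placeholder, not something provided by this repo:

```python
# setup.py -- minimal packaging sketch for a multi-module Python solution.
# `mrjobs` is a hypothetical package name; substitute your own.
from setuptools import setup, find_packages

setup(
    name="mrjobs",
    version="0.1.0",
    packages=find_packages(),  # picks up the `mrjobs/` package directory
)
```

Running `python setup.py bdist_egg` produces an egg under `dist/` that can be shipped alongside your entry-point script via `spark-submit --py-files`.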
Make sure the jobs can be submitted (through the `spark-submit` command) in the Spark Master container shell. A `data` directory is provided that maps between the Spark Master container and your host system; it is accessible as `/tmp/data` inside the Docker container. This is where you should place both your jobs and the work sample data (the latter is already included).
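
To illustrate the end-to-end flow, here is a minimal PySpark job sketch. The file name, input file, output directory and master URL are assumptions made for this example, not part of the provided work sample:

```python
# /tmp/data/example_job.py -- a minimal self-contained sketch.
# Submit it from the master container shell with something like:
#   spark-submit --master spark://master:7077 /tmp/data/example_job.py
# (the master URL may differ depending on your compose setup)
from pyspark import SparkContext

if __name__ == "__main__":
    sc = SparkContext(appName="example-job")
    # Count occurrences of each distinct line in a plain-text input file.
    lines = sc.textFile("/tmp/data/sample_input.txt")
    counts = lines.map(lambda line: (line.strip(), 1)) \
                  .reduceByKey(lambda a, b: a + b)
    counts.saveAsTextFile("/tmp/data/example_output")
    sc.stop()
```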
