LearningSpark

Scala examples for learning to use Spark

Primary language: Scala. License: MIT.

The LearningSpark Project

This project contains snippets of Scala code illustrating various Apache Spark concepts. It is intended to help you get started with learning Apache Spark (as a Scala programmer) by providing a very easy on-ramp that doesn't involve Unix, cluster configuration, building from source, or installing Hadoop. Many of these activities will be necessary later in your learning experience, after you've used these examples to achieve basic familiarity.

It is intended to accompany a number of posts on the blog A River of Bytes.

Dependencies

The project was created with IntelliJ IDEA 14 Community Edition, JDK 1.7, Scala 2.11.2, and Spark 1.3.0 on Windows 8.

Versions of these examples for other configurations (older versions of Scala and Spark) can be found in various branches.
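If you would rather build from the command line with sbt than through IntelliJ IDEA, a minimal build definition matching the versions listed above might look like the following. This is an illustrative sketch, not this project's actual build file; the project name is made up.

```scala
// Hypothetical build.sbt -- an illustration matching the versions above,
// not this project's actual build configuration.
name := "learning-spark-sandbox"

scalaVersion := "2.11.2"

// Spark core is all that's needed for the basic RDD examples; the sql,
// hive, and streaming examples would need the corresponding artifacts too.
libraryDependencies += "org.apache.spark" %% "spark-core" % "1.3.0"
```

With a file like this in place, `sbt run` will compile the sources and prompt you to choose which `main` method to execute.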

Examples

The examples can be found under src/main/scala. The best way to use them is to start by reading the code and its comments. Then, since each file defines an object with a main method, run it and examine the output. Relevant blog posts and StackOverflow answers are listed below.

| Package | File | What's Illustrated |
|---|---|---|
| | Ex1_SimpleRDD | How to execute your first, very simple, Spark job. See also An easy way to start learning Spark. |
| | Ex2_Computations | How RDDs work in more complex computations. See also Spark computations. |
| | Ex3_CombiningRDDs | Operations on multiple RDDs. |
| | Ex4_MoreOperationsOnRDDs | More complex operations on individual RDDs. |
| | Ex5_Partitions | Explicit control of partitioning for performance and scalability. |
| | Ex6_Accumulators | How to use Spark accumulators to efficiently gather the results of distributed computations. |
| hiveql | | Using HiveQL features in a HiveContext. See the local README.md in that directory for details. |
| special | CustomPartitioner | How to take control of the partitioning of an RDD. |
| special | HashJoin | How to use the well-known hash join algorithm to join two RDDs, where one is small enough to fit entirely in the memory of each partition. See also this question on StackOverflow. |
| special | PairRDD | How to operate on RDDs whose underlying elements are pairs. |
| dataframe | | A range of DataFrame examples; see the local README.md in that directory for details. |
| experiments | | Experimental examples that may or may not lead to anything interesting. |
| sql | | A range of SQL examples; see the local README.md in that directory for details. |
| streaming | FileBased | Streaming from a sequence of files. |
| streaming | QueueBased | Streaming from a queue. |
| streaming | Accumulation | Accumulating stream data in a single RDD. |
| streaming | Windowing | Maintaining a sliding window over the most recent stream data. |
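To give a flavor of the kind of "first Spark job" the examples above start from, here is a minimal, self-contained sketch. It is illustrative only (the object name and values are made up, not taken from this project), but it follows the same pattern: a local master, one RDD, one transformation, one action.

```scala
import org.apache.spark.{SparkConf, SparkContext}

// An illustrative sketch of a minimal Spark job -- not this project's code.
object SimpleRDDSketch {
  def main(args: Array[String]): Unit = {
    // "local[4]" runs Spark inside this JVM with 4 threads -- no cluster needed
    val conf = new SparkConf().setAppName("SimpleRDDSketch").setMaster("local[4]")
    val sc = new SparkContext(conf)

    // Distribute a local collection across 4 partitions to form an RDD
    val numbers = sc.parallelize(1 to 10, 4)

    // Transformations like map() are lazy; the collect() action triggers
    // the actual computation and brings the results back to the driver
    val doubled = numbers.map(_ * 2)
    doubled.collect().foreach(println)

    sc.stop()
  }
}
```

Running a `main` method like this directly from the IDE, with the master set to `local[n]`, is what makes the no-cluster, no-Hadoop on-ramp described above possible.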

Additional Scala code is "work in progress".