
Tutorial on Apache Spark in Python (PySpark) and its integration with ElasticSearch


Apache-Spark-Tutorial

Spark is a well-known cluster computing framework. At the lowest level, we work with RDDs (Resilient Distributed Datasets). Spark also includes higher-level modules such as Spark SQL, DataFrames, GraphX (graph processing), MLlib (machine learning), and Spark Streaming.


This tutorial focuses on working with normal and key-value RDDs using the Python programming language. PySpark is the Python API for Spark.
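
As a quick taste of what the first notebook covers, here is a minimal PySpark sketch; the sample data and the local[*] master setting are illustrative placeholders rather than code taken from the notebook:

```python
from pyspark import SparkConf, SparkContext

# Configure and create a SparkContext for a local run
conf = SparkConf().setAppName("rdd-basics").setMaster("local[*]")
sc = SparkContext(conf=conf)

# Normal RDD: transformations are lazy, actions trigger computation
nums = sc.parallelize([1, 2, 3, 4, 5])
squares = nums.map(lambda x: x * x)          # transformation
print(squares.collect())                     # action -> [1, 4, 9, 16, 25]

# Key-value RDD: pair-wise transformations such as reduceByKey
pairs = sc.parallelize([("a", 1), ("b", 2), ("a", 3)])
print(pairs.reduceByKey(lambda x, y: x + y).collect())  # e.g. [('a', 4), ('b', 2)]

sc.stop()
```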

  1. PySparkTutorial.ipynb: This notebook demonstrates the following -
  • How to configure Spark and a standalone Spark cluster on a local machine.
  • Creation of a SparkContext.
  • Various RDD (normal & key-value RDD) transformations and actions, with examples.
  • Performance-related features such as accumulators and broadcast variables (see the first sketch after this list).
  2. ElasticSearch_PySpark_Integration.ipynb: This notebook covers -
  • Integration of Elasticsearch with Spark.
  • Analysis of network traffic data using Elasticsearch and Spark to detect connections that deviate from the mean by more than 3 sigma (see the second sketch after this list).
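
First sketch: accumulators and broadcast variables from PySparkTutorial.ipynb. The lookup table and counter below are illustrative placeholders, not code taken from the notebook:

```python
from pyspark import SparkContext

sc = SparkContext("local[*]", "shared-variables")

# Broadcast variable: read-only lookup data shipped once to every executor
country_names = sc.broadcast({"US": "United States", "IN": "India"})

# Accumulator: a counter that tasks add to and only the driver reads
unknown = sc.accumulator(0)

def resolve(code):
    if code not in country_names.value:
        unknown.add(1)                       # incremented on the executors
        return "Unknown"
    return country_names.value[code]

codes = sc.parallelize(["US", "IN", "FR", "US"])
print(codes.map(resolve).collect())          # ['United States', 'India', 'Unknown', 'United States']
print(unknown.value)                         # 1

sc.stop()
```

Second sketch: reading connection records from Elasticsearch and flagging 3-sigma outliers. This assumes the elasticsearch-hadoop connector jar is on Spark's classpath; the index name, field name, and node address are placeholders, not the notebook's actual configuration:

```python
from pyspark import SparkContext

sc = SparkContext("local[*]", "es-3sigma")

# Read documents from Elasticsearch through the elasticsearch-hadoop connector
es_conf = {
    "es.nodes": "localhost",
    "es.port": "9200",
    "es.resource": "network-traffic/connection",   # placeholder index/type
}
docs = sc.newAPIHadoopRDD(
    inputFormatClass="org.elasticsearch.hadoop.mr.EsInputFormat",
    keyClass="org.apache.hadoop.io.NullWritable",
    valueClass="org.elasticsearch.hadoop.mr.LinkedMapWritable",
    conf=es_conf,
)

# Each record arrives as (doc_id, field_dict); pull out the bytes per connection
bytes_rdd = docs.map(lambda kv: float(kv[1]["bytes"]))

# Flag connections more than 3 standard deviations above the mean
stats = bytes_rdd.stats()                    # StatCounter with mean(), stdev(), ...
threshold = stats.mean() + 3 * stats.stdev()
outliers = docs.filter(lambda kv: float(kv[1]["bytes"]) > threshold)
print(outliers.take(5))

sc.stop()
```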