Description: Code snippets and tutorials for working with SparkR.
The tutorials included in this repository are geared towards social scientists and policy researchers who want to undertake research using "big data" sets. A manual to accompany these tutorials is linked below. The objective of the manual is to provide social scientists with a brief overview of the distributed computing solution developed by The Urban Institute's Research Programming Team, and of the changes this computing environment requires in how researchers manage and analyze data.
Last Updated: May 23, 2017
In order to begin working with SparkR, users must first:
- Make sure that `SPARK_HOME` is set in the environment (using `Sys.getenv`)
- Load the `SparkR` library
- Initiate a `sparkR.session`
```r
# Set SPARK_HOME if it is not already set in the environment
if (nchar(Sys.getenv("SPARK_HOME")) < 1) {
  Sys.setenv(SPARK_HOME = "/home/spark")
}

# Load the SparkR library
library(SparkR)

# Initiate a SparkR session
sparkR.session()
```
❗ The expressions given above must be evaluated in SparkR before beginning any of the tutorials hosted here. Example data loading is included in each tutorial.
Users can end a SparkR session with the following expression:
```r
sparkR.session.stop()
```
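As a minimal end-to-end sketch, the session lifecycle above can be combined with a small DataFrame operation. This example uses R's built-in `faithful` data set; it assumes `SPARK_HOME` is already set and a Spark installation is available:

```r
# Assumes SPARK_HOME is set and SparkR is installed
library(SparkR)

# Start a session
sparkR.session()

# Create a Spark DataFrame from a local R data.frame (built-in faithful data set)
df <- createDataFrame(faithful)

# Inspect the first rows; head() on a SparkDataFrame collects a small sample locally
head(df)

# End the session when finished
sparkR.session.stop()
```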
Note: the data visualization tutorial, linked below, has not yet been updated for SparkR 2.0, but it still functions with SparkR 1.6.
- Time Series I: Working with the Date Datatype & Resampling a DataFrame