This repository mainly contains notes from learning Apache Spark, by Ming Chen & Wenqiang Feng. We use detailed demo code and examples to show how to use PySpark for big data mining. If you find that your work is not cited in these notes, please feel free to let us know.
- Cheat Sheets
- Data Manipulation
- Entry Points to Spark
- RDD Object
- DataFrame Object
- RDD and DataFrame conversion
- Categorical data: `StringIndexer` and `OneHotEncoder`
- Continuous variables to categorical variables
- Import and export data
- Subset data:
  - select rows by index
  - select rows by logical criteria
  - select columns by index
  - select columns by names
  - select columns by regex pattern
- `udf()` function and SQL data types:
  - use the `udf()` function
  - difference between `ArrayType` and `StructType`
- Pipeline
- Dense and sparse vectors
- Assemble feature columns into a `featuresCol` column with `VectorAssembler`
- TF-IDF, HashingTF and CountVectorizer
- Feature processing:
- SQL functions
- Add .py files to the cluster
- Machine Learning
- Model Tuning
- Natural Language Processing
We would like to thank Jian Sun and Zhongbo Li at the University of Tennessee, Knoxville, for the valuable discussions, and to thank the generous anonymous authors who provided detailed solutions and source code on the internet. Without their help, this repository would not have been possible. Wenqiang would also like to thank the Institute for Mathematics and Its Applications (IMA) at the University of Minnesota, Twin Cities, for its support during his IMA Data Scientist Fellow visit.
Your comments and suggestions are highly appreciated. We are more than happy to receive corrections, suggestions, or feedback by email (Ming Chen: mchen33@utk.edu; Wenqiang Feng: wfeng1@utk.edu).