ApacheSpark

This repository will help you learn Databricks concepts with the help of examples. It covers the important topics we need in our real-life work as data engineers.


Introduction

This course includes multiple sections. We mainly focus on the Databricks Data Engineer certification exam. We have the following tutorials:

  1. Spark SQL ETL
  2. PySpark ETL

DATASETS

All the datasets used in the tutorials are available at: https://github.com/martandsingh/datasets

Spark SQL

This course is the first installment of the Databricks data engineering course. In this course you will learn basic SQL concepts, which include the following (a short sketch follows the list):

  1. Create, Select, Update, Delete tables
  2. Create database
  3. Filtering data
  4. Group by & aggregation
  5. Ordering
  6. SQL joins
  7. Common table expression (CTE)
  8. External tables
  9. Sub queries
  10. Views & temp views
  11. UNION, INTERSECT, EXCEPT keywords
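
To give a quick feel for these topics, here is a minimal sketch in Python using spark.sql. It assumes a Databricks notebook (where a SparkSession named `spark` already exists); the database, table, and column names are made-up placeholders, not the ones used in the course notebooks.

```python
from pyspark.sql import SparkSession

# On Databricks, `spark` is predefined; getOrCreate() simply returns it.
spark = SparkSession.builder.appName("spark-sql-demo").getOrCreate()

# Create a database and a managed table.
spark.sql("CREATE DATABASE IF NOT EXISTS demo_db")
spark.sql("""
    CREATE TABLE IF NOT EXISTS demo_db.orders (
        order_id INT,
        country  STRING,
        amount   DOUBLE
    )
""")

# Filtering, grouping & aggregation, and ordering.
spark.sql("""
    SELECT country, COUNT(*) AS order_count, SUM(amount) AS total_amount
    FROM demo_db.orders
    WHERE amount > 100
    GROUP BY country
    ORDER BY total_amount DESC
""").show()

# A common table expression (CTE) feeding a follow-up query.
spark.sql("""
    WITH big_orders AS (
        SELECT * FROM demo_db.orders WHERE amount > 100
    )
    SELECT DISTINCT country FROM big_orders
""").show()
```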

PySpark ETL

This course will teach you how to build ETL pipelines using PySpark. ETL stands for Extract, Transform & Load. We will see how to extract data from various sources, process it, and finally load the processed data to our destination (see the sketch after the topic list).

This course includes:

  1. Read files
  2. Schema handling
  3. Handling JSON files
  4. Write files
  5. Basic transformations
  6. Partitioning
  7. Caching
  8. Joins
  9. Missing value handling
  10. Data profiling
  11. Date & time functions
  12. String functions
  13. Deduplication
  14. Grouping & aggregation
  15. User-defined functions
  16. Ordering data
  17. Case study - sales order analysis

You can download all the notebooks from our GitHub repo: https://github.com/martandsingh/ApacheSpark

Facebook: https://www.facebook.com/codemakerz

Email: martandsays@gmail.com

SETUP folder

You will see the initial_setup & clean_up notebooks called in every notebook. It is mandatory to run both scripts in the defined order: the initial_setup script creates all the tables & databases required for the demo, and after you finish a notebook, executing the clean_up notebook drops all of those database objects.
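
In a Databricks notebook, other notebooks are invoked with the %run magic, which must sit alone in its own cell. A typical course notebook therefore looks roughly like this (the relative paths are illustrative and depend on where the repo is cloned in your workspace):

```python
# Cell 1 - run before anything else; creates the demo database & tables:
# %run ./SETUP/initial_setup

# ... the notebook's own cells go here ...

# Last cell - drops all database objects created for the demo:
# %run ./SETUP/clean_up
```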
