
Data Pipeline with Airflow

Overview

A music streaming company, Sparkify, has decided that it is time to introduce more automation and monitoring to their data warehouse ETL pipelines and has come to the conclusion that the best tool to achieve this is Apache Airflow.

Project Description

This project creates a high-grade data pipeline that is dynamic, built from reusable tasks, can be monitored, and allows easy backfills. Because data quality plays a big part when analyses are executed on top of the data warehouse, the pipeline runs tests against Sparkify's datasets after the ETL steps have been executed to catch any discrepancies in the data.

The source data resides in S3 and needs to be processed in Sparkify's data warehouse in Amazon Redshift. The source datasets consist of JSON logs that describe user activity in the application and JSON metadata about the songs the users listen to.

Datasets

Here are the S3 links for the datasets used in this project:

  • Log data: s3://udacity-dend/log_data
  • Song data: s3://udacity-dend/song_data

Structure

The project has two directories, dags and plugins. A create-tables script and this README are at the root level:

  • create_tables.sql: SQL CREATE TABLE statements provided with the template.

dags directory contains:

  • sparkify_etl_dag.py: Defines the main DAG and its tasks, and links the tasks in the required order (a minimal sketch of the wiring follows below).
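
Below is a minimal sketch of how such a DAG might be wired together, assuming Airflow 1.10-style imports; the task ids, default_args values, and schedule shown here are illustrative stand-ins (DummyOperator takes the place of the custom operators described in the next section), not the exact definitions in this repository.

    from datetime import datetime, timedelta

    from airflow import DAG
    from airflow.operators.dummy_operator import DummyOperator

    # Defaults applied to every task in the DAG (illustrative values)
    default_args = {
        'owner': 'sparkify',
        'start_date': datetime(2019, 1, 12),
        'depends_on_past': False,          # a run does not wait on the previous run
        'retries': 3,                      # retry failed tasks three times
        'retry_delay': timedelta(minutes=5),
        'email_on_retry': False,
    }

    dag = DAG(
        'sparkify_etl_dag',
        default_args=default_args,
        description='Load and transform data in Redshift with Airflow',
        schedule_interval='@hourly',
        catchup=False,
    )

    # DummyOperator stands in for the custom staging/load/quality operators
    start = DummyOperator(task_id='Begin_execution', dag=dag)
    stage_events = DummyOperator(task_id='Stage_events', dag=dag)
    stage_songs = DummyOperator(task_id='Stage_songs', dag=dag)
    load_songplays = DummyOperator(task_id='Load_songplays_fact_table', dag=dag)
    quality_checks = DummyOperator(task_id='Run_data_quality_checks', dag=dag)
    end = DummyOperator(task_id='Stop_execution', dag=dag)

    # Link the tasks: stage both sources, load the fact table, then run checks
    start >> [stage_events, stage_songs]
    [stage_events, stage_songs] >> load_songplays
    load_songplays >> quality_checks >> end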

plugins/operators directory contains:

  • stage_redshift.py: Defines StageToRedshiftOperator to copy JSON data from S3 to staging tables in Redshift via the COPY command.
  • load_dimension.py: Defines LoadDimensionOperator to load a dimension table from the staging table(s).
  • load_fact.py: Defines LoadFactOperator to load the fact table from the staging table(s).
  • data_quality.py: Defines DataQualityOperator to run data quality checks on all tables passed as a parameter (see the sketch after this list).
  • sql_queries.py: Contains SQL queries for the ETL pipeline (provided in template).
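
As an illustration of the operator pattern, here is a minimal sketch of a data quality operator in the Airflow 1.10 plugin style; the parameter names (redshift_conn_id, tables) and the specific check performed are assumptions for illustration, not necessarily what this repository implements.

    from airflow.hooks.postgres_hook import PostgresHook
    from airflow.models import BaseOperator
    from airflow.utils.decorators import apply_defaults


    class DataQualityOperator(BaseOperator):
        """Fail the task if any of the given tables is empty."""

        ui_color = '#89DA59'

        @apply_defaults
        def __init__(self, redshift_conn_id='redshift', tables=None, *args, **kwargs):
            # Parameter names here are assumed for the sketch
            super(DataQualityOperator, self).__init__(*args, **kwargs)
            self.redshift_conn_id = redshift_conn_id
            self.tables = tables or []

        def execute(self, context):
            redshift = PostgresHook(postgres_conn_id=self.redshift_conn_id)
            for table in self.tables:
                records = redshift.get_records(f"SELECT COUNT(*) FROM {table}")
                if not records or not records[0] or records[0][0] < 1:
                    raise ValueError(f"Data quality check failed: {table} returned no rows")
                self.log.info("Data quality check on %s passed with %s records",
                              table, records[0][0])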

Config

This code uses Python 3 and assumes that Apache Airflow is installed and configured.

  • Create a Redshift cluster and run create_tables.sql on it once.
  • Make sure to add the following two Airflow connections (referenced by the pipeline code as sketched below):
    • AWS credentials, named aws_credentials
    • Connection to Redshift, named redshift
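
The connection names above are what the pipeline code looks up at runtime. As a rough sketch, assuming Airflow 1.10-style hooks (the exact hook usage in this repository may differ):

    from airflow.contrib.hooks.aws_hook import AwsHook
    from airflow.hooks.postgres_hook import PostgresHook

    # 'aws_credentials' stores the AWS access key id and secret access key
    aws_hook = AwsHook('aws_credentials')
    credentials = aws_hook.get_credentials()

    # 'redshift' is a Postgres-type connection pointing at the Redshift cluster
    redshift = PostgresHook(postgres_conn_id='redshift')
    redshift.run('SELECT 1')  # simple connectivity check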