In this project, we help the music streaming startup Sparkify move their data warehouse to a data lake. Specifically, I build an ETL pipeline that extracts their data from S3, processes it using Spark, and loads it back into S3 as a set of dimensional tables. This allows their analytics team to continue finding insights into what songs their users are listening to.
Datasets used in this project are provided in two public S3 buckets. One bucket contains information about songs and artists; the second holds information about user actions (which song was played, and so on). The objects contained in both buckets are JSON files.
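A quick way to get familiar with the raw data is to read the JSON objects directly with Spark and inspect the inferred schemas. This is only a sketch; the bucket paths below are placeholders, not the project's actual locations:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sparkify-explore").getOrCreate()

# Placeholder S3 locations; substitute the real public bucket paths.
song_df = spark.read.json("s3a://<song-bucket>/song_data/*/*/*/*.json")
log_df = spark.read.json("s3a://<log-bucket>/log_data/*/*/*.json")

song_df.printSchema()
log_df.printSchema()
```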
I create a star schema optimized for queries on song play analysis, consisting of the following tables (a sample analytic query over them is sketched after the list):
- songplays - records in event data associated with song plays, i.e. records with page NextSong
- users - users in the app
- songs - songs in the music database
- artists - artists in the music database
- time - timestamps of records in songplays broken down into specific units
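As an illustration of the kind of analysis this schema enables, the sketch below joins songplays to songs to count the most-played titles. The column names used here (song_id, title) and the output locations are assumptions about the table layouts, which are not spelled out in this README:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sparkify-analysis").getOrCreate()

# Placeholder output locations for the dimensional tables.
songplays = spark.read.parquet("s3a://<output-bucket>/songplays/")
songs = spark.read.parquet("s3a://<output-bucket>/songs/")

top_songs = (
    songplays.join(songs, "song_id")       # fact table joined to the songs dimension
             .groupBy("title")
             .count()
             .orderBy("count", ascending=False)
             .limit(10)
)
top_songs.show()
```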
The ETL job processes the song files first and then the log files. The song files are listed and iterated over, and the relevant fields are written out in parquet format to the songs and artists folders (a sketch of this step follows).
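A minimal sketch of the song-file step, assuming the usual field names of the song dataset (song_id, title, artist_id, year, duration, artist_name, and so on) and placeholder S3 paths:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sparkify-etl").getOrCreate()

# Placeholder input location for the song JSON files.
song_data = spark.read.json("s3a://<song-bucket>/song_data/*/*/*/*.json")

# songs table, partitioned by year and artist for efficient reads.
songs_table = (
    song_data.select("song_id", "title", "artist_id", "year", "duration")
             .dropDuplicates(["song_id"])
)
songs_table.write.mode("overwrite") \
    .partitionBy("year", "artist_id") \
    .parquet("s3a://<output-bucket>/songs/")

# artists table, deduplicated on artist_id.
artists_table = (
    song_data.select("artist_id", "artist_name", "artist_location",
                     "artist_latitude", "artist_longitude")
             .dropDuplicates(["artist_id"])
)
artists_table.write.mode("overwrite").parquet("s3a://<output-bucket>/artists/")
```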
The log files are filtered by the NextSong action. The resulting dataset is then processed to extract the date and time fields (year, month, day, hour, etc.), and the records are written in parquet format to the time, users, and songplays folders for analysis (sketched below).
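A corresponding sketch of the log-file step. It reads back the songs output from the previous sketch, and the log field names (ts, page, userId, sessionId, userAgent, song) as well as the simple title-only join used to resolve song_id and artist_id are assumptions:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("sparkify-etl").getOrCreate()

# Placeholder input location for the log JSON files.
log_data = spark.read.json("s3a://<log-bucket>/log_data/*/*/*.json")

# Keep only actual song plays.
log_data = log_data.filter(F.col("page") == "NextSong")

# ts is assumed to be epoch milliseconds; cast it to a proper timestamp.
log_data = log_data.withColumn("start_time", (F.col("ts") / 1000).cast("timestamp"))

# time table: timestamps broken down into specific units.
time_table = log_data.select("start_time").dropDuplicates().select(
    "start_time",
    F.hour("start_time").alias("hour"),
    F.dayofmonth("start_time").alias("day"),
    F.weekofyear("start_time").alias("week"),
    F.month("start_time").alias("month"),
    F.year("start_time").alias("year"),
    F.dayofweek("start_time").alias("weekday"),
)
time_table.write.mode("overwrite").partitionBy("year", "month") \
    .parquet("s3a://<output-bucket>/time/")

# users table, deduplicated on user_id.
users_table = log_data.select(
    F.col("userId").alias("user_id"), "firstName", "lastName", "gender", "level"
).dropDuplicates(["user_id"])
users_table.write.mode("overwrite").parquet("s3a://<output-bucket>/users/")

# songplays table: join log events back to song metadata to pick up song_id/artist_id.
songs_df = spark.read.parquet("s3a://<output-bucket>/songs/")
songplays_table = (
    log_data.join(songs_df, log_data.song == songs_df.title, "left")
            .select(
                "start_time",
                F.col("userId").alias("user_id"),
                "level", "song_id", "artist_id",
                F.col("sessionId").alias("session_id"),
                "location",
                F.col("userAgent").alias("user_agent"),
            )
            .withColumn("year", F.year("start_time"))
            .withColumn("month", F.month("start_time"))
)
songplays_table.write.mode("overwrite").partitionBy("year", "month") \
    .parquet("s3a://<output-bucket>/songplays/")
```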
- etl.py - the ETL script: reads data from S3, processes it using Spark, and writes the results to a new S3 bucket
- dl.cfg - configuration file that contains the AWS credentials (see the sketch below)
- test.ipynb - test notebook for etl.py
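For reference, a minimal sketch of how the credentials in dl.cfg can be loaded before the Spark job starts; the section and key names here are assumptions about the file's layout, not confirmed contents:

```python
import configparser
import os

# Assumed dl.cfg layout: an [AWS] section holding the access key pair.
config = configparser.ConfigParser()
config.read("dl.cfg")

os.environ["AWS_ACCESS_KEY_ID"] = config["AWS"]["AWS_ACCESS_KEY_ID"]
os.environ["AWS_SECRET_ACCESS_KEY"] = config["AWS"]["AWS_SECRET_ACCESS_KEY"]
```

With the credentials in place, the job can be launched with, for example, spark-submit etl.py on a cluster that has access to the S3 buckets.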