Data Warehouses with AWS Redshift and S3
Introduction
This project demonstrates how to build a data warehouse on Amazon Redshift (which is based on PostgreSQL), using data stored in AWS S3.
The scripts in the project let a user run an ETL process that starts by creating all the necessary staging, fact, and dimensional tables, and then loads two datasets: the Million Song Dataset and the Log Dataset -- log files generated by an event simulator based on the Million Song Dataset.
Database schema
The database schema builds the following tables from two distinct datasets, which are ultimately combined into a fact table:
- song dataset:
- staging_songs table: staging/intermediate table to perform ETL and load the song dataset;
- song table: dimensional table that contains data about the available songs, such as year, duration, title, song ID (PK), and artist ID (FK);
- artists table: dimensional table that contains data about the songs' artists, such as artist ID (PK), artist name, and the artist's latitude and longitude;
- log dataset (user interaction data):
- staging_events table: staging/intermediate table to perform ETL and load the log dataset;
- time table: dimensional table containing data related to the times at which users were listening to music;
- users table: dimensional table containing user data such as first and last name, gender and subscription level.
Lastly, the fact table that gathers the data from these two datasets is the songplay table (sketched below). It contains foreign keys such as artist ID, song ID, and user ID, as well as the timestamp at which the song was played and the user's location.
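For illustration, here is a minimal sketch of how the songplay fact table could be declared on Redshift, in the style of the DDL strings kept in `sql_queries.py`. The exact column names, types, and constraints are assumptions based on the description above, not the project's actual definitions.

```python
# Hypothetical DDL string in the style of sql_queries.py; column names
# and types are assumptions based on the schema description above.
songplay_table_create = """
CREATE TABLE IF NOT EXISTS songplay (
    songplay_id INT IDENTITY(0, 1) PRIMARY KEY,
    start_time  TIMESTAMP NOT NULL,  -- FK to the time table
    user_id     INT       NOT NULL,  -- FK to the users table
    level       VARCHAR,             -- user's subscription level
    song_id     VARCHAR,             -- FK to the song table
    artist_id   VARCHAR,             -- FK to the artists table
    session_id  INT,
    location    VARCHAR,
    user_agent  VARCHAR
);
"""
```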
Requirements
- Install Python 3.x;
- Create an AWS Redshift cluster;
- Create an IAM role for the cluster;
- Run `create_tables.py` to create all the staging, fact, and dimensional tables that we need;
- Run `sql_queries.py` to load the datasets into the staging tables and then query the staging tables to populate the fact and dimensional tables (a hedged sketch of this step follows below).