This project implements a data pipeline based on the Medallion Architecture, leveraging Microsoft Azure services including Azure Data Factory, Azure Databricks, and DBT (Data Build Tool). The pipeline handles the extraction, transformation, and loading (ETL) of data, enabling seamless data processing and analysis.
The Medallion Architecture is a data processing framework that organizes data into progressively refined layers (commonly Bronze for raw data, Silver for cleaned data, and Gold for business-level aggregates), ensuring the scalability, reliability, and maintainability of data pipelines. Our implementation uses the following components:
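To make the layered idea concrete, here is a minimal, Spark-free sketch of the three Medallion stages in plain Python. A real pipeline would run these steps as Spark jobs on Databricks; the record fields, source name, and values below are purely illustrative assumptions.

```python
# Illustrative sketch of the Bronze -> Silver -> Gold layers.
# Field names and values are hypothetical; real pipelines would
# operate on Spark DataFrames in Databricks, not Python lists.

def bronze_ingest(raw_rows):
    """Bronze: land raw records as-is, tagging each with its source."""
    return [dict(row, _source="orders_csv") for row in raw_rows]

def silver_clean(bronze_rows):
    """Silver: drop malformed rows and normalize types."""
    cleaned = []
    for row in bronze_rows:
        if row.get("amount") is None:
            continue  # discard records missing a required field
        cleaned.append({**row, "amount": float(row["amount"])})
    return cleaned

def gold_aggregate(silver_rows):
    """Gold: produce a business-level aggregate (revenue per customer)."""
    totals = {}
    for row in silver_rows:
        totals[row["customer"]] = totals.get(row["customer"], 0.0) + row["amount"]
    return totals

raw = [
    {"customer": "a", "amount": "10.5"},
    {"customer": "b", "amount": None},   # malformed: filtered out in Silver
    {"customer": "a", "amount": "4.5"},
]
print(gold_aggregate(silver_clean(bronze_ingest(raw))))  # {'a': 15.0}
```

Each layer only depends on the output of the previous one, which is what lets the Azure services below orchestrate them as independent, restartable steps.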
- Azure Data Factory: Orchestrates and automates data movement and transformation workflows. It provides a visual interface for building, monitoring, and managing data pipelines.
- Azure Databricks: A unified analytics platform that integrates with Azure services for big data processing. Databricks clusters enable scalable data processing using Apache Spark, and its notebooks facilitate collaborative development and execution of data transformation logic.
- DBT (Data Build Tool): A command-line tool for transforming data directly in the warehouse. It is designed for teams that want transformation code that is modular, testable, and easy to change.
The pipeline offers the following features:

- Modular Pipeline: The pipeline is modular, allowing easy addition or modification of data sources, transformations, and destinations.
- Scalability: Leveraging Azure services ensures the pipeline scales to handle large data volumes and varying workloads.
- Automated Workflow: Data movement, transformation, and orchestration are automated, reducing manual intervention and potential errors.
- Version Control: DBT enables version control of data transformation logic, promoting collaboration and ensuring reproducibility.
To get started with this pipeline, follow the steps in the Procedure.pdf file. Feel free to modify the data flow structure when building your own pipeline.