This is the code repository for Data Engineering with Apache Spark, Delta Lake, and Lakehouse, published by Packt.
Create scalable pipelines that ingest, curate, and aggregate complex data in a timely and secure way
In a world of ever-changing data and schemas, it is important to build data pipelines that can adapt to those changes automatically. This book will help you build scalable data platforms that managers, data scientists, and data analysts can rely on.
This book covers the following exciting features:
- Discover the challenges you may face in the data engineering world
- Add ACID transactions to Apache Spark using Delta Lake (a minimal sketch follows this list)
- Understand effective design strategies to build enterprise-grade data lakes
- Explore architectural and design patterns for building efficient data ingestion pipelines
- Orchestrate a data pipeline for preprocessing data using Apache Spark and Delta Lake APIs
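For a flavor of what the first two items look like in practice, here is a minimal PySpark sketch of writing to a Delta table with ACID guarantees. It is not taken from the book's chapters; the table path, schema, and `delta-spark` setup are illustrative assumptions:

```python
# A minimal sketch: Delta Lake gives Spark writes ACID guarantees.
# Assumes the delta-spark package is installed; the table path and
# schema below are illustrative only, not from the book's repository.
from delta import configure_spark_with_delta_pip
from pyspark.sql import SparkSession

builder = (
    SparkSession.builder.appName("delta-acid-demo")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
)
spark = configure_spark_with_delta_pip(builder).getOrCreate()

# Each write below is a single atomic commit to the Delta transaction log.
users = spark.createDataFrame([(1, "alice"), (2, "bob")], ["id", "name"])
users.write.format("delta").mode("overwrite").save("/tmp/delta/users")

spark.createDataFrame([(3, "carol")], ["id", "name"]) \
    .write.format("delta").mode("append").save("/tmp/delta/users")

spark.read.format("delta").load("/tmp/delta/users").show()
```

Because every write is a single commit to the Delta transaction log, concurrent readers see either the previous snapshot or the new one, never a half-written table.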
If you feel this book is for you, get your copy today!
All of the code is organized into folders. For example, Chapter02.
The code will look like the following (a representative PySpark and Delta Lake snippet; the table path is illustrative):

```python
df = spark.read.format("delta").load("/mnt/data/delta/sales")
df.groupBy("country").count().show()
```
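This assumes a Spark session with Delta Lake support is already available as `spark`, as it is in Databricks and Azure Synapse notebooks; the session-setup sketch earlier in this README shows one way to create it locally.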
Following is what you need for this book: This book is for aspiring data engineers and data analysts who are new to the world of data engineering and are looking for a practical guide to building scalable data platforms. If you already work with PySpark and want to use Delta Lake for data engineering, you'll find this book useful. Basic knowledge of Python, Spark, and SQL is expected.
With the following software and hardware list, you can run all of the code files in the book (Chapters 1-12).
Chapter | Software required | OS required |
---|---|---|
1-12 | Azure | Any (Windows, macOS, or Linux) |
We also provide a PDF file with color images of the screenshots and diagrams used in this book. Click here to download it.
Manoj Kukreja is a Principal Architect at Northbay Solutions who specializes in creating complex data lakes and data analytics pipelines for large-scale organizations such as banks, insurance companies, universities, and US/Canadian government agencies. Previously, he worked for Pythian, a large managed services provider, where he led the MySQL and MongoDB DBA group and supported large-scale data infrastructure for enterprises across the globe. With over 25 years of IT experience, he has delivered data lake solutions using all major cloud providers, including AWS, Azure, GCP, and Alibaba Cloud. On weekends, he trains groups of aspiring data engineers and data scientists on Hadoop, Spark, Kafka, and data analytics on AWS and Azure.