This is the code repository for Optimizing Databricks Workloads, published by Packt.
Harness the power of Apache Spark in Azure and maximize the performance of modern big data workloads
Databricks is an industry-leading, cloud-based platform for data analytics, data science, and data engineering supporting thousands of organizations across the world in their data journey. It is a fast, easy, and collaborative Apache Spark-based big data analytics platform for data science and data engineering in the cloud.
This book covers the following exciting features:
- Get to grips with Spark fundamentals and the Databricks platform
- Process big data using the Spark DataFrame API with Delta Lake
- Analyze data using graph processing in Databricks
- Use MLflow to manage machine learning life cycles in Databricks
- Find out how to choose the right cluster configuration for your workloads
If you feel this book is for you, get your copy today!
All of the code is organized into folders. For example, Chapter02.
The code will look like the following:
```python
airlines_1987_to_2008.select('Year').distinct().collect()
```
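For context, here is a minimal sketch of how a DataFrame like `airlines_1987_to_2008` might be built before running the query above. It is not taken from the book: the sample-dataset path `/databricks-datasets/asa/airlines/` and the CSV read options are assumptions about a typical Databricks workspace, where a `spark` session is already available in the notebook.

```python
# Minimal sketch (assumptions, not the book's code): load the airline
# on-time CSV files from the Databricks sample datasets and list the
# distinct years. Adjust the path to wherever your copy of the data lives.
airlines_1987_to_2008 = (
    spark.read
    .option("header", "true")        # first row of each file holds column names
    .option("inferSchema", "true")   # let Spark infer column types
    .csv("/databricks-datasets/asa/airlines/*.csv")
)

# Collect the distinct values of the Year column to the driver.
distinct_years = airlines_1987_to_2008.select("Year").distinct().collect()
print(sorted(row["Year"] for row in distinct_years))
```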
Following is what you need for this book: This book is for data engineers, data scientists, and cloud architects who have a working knowledge of Spark/Databricks and a basic understanding of data engineering principles. Readers will need a working knowledge of Python; some experience with SQL, PySpark, and Spark SQL is also beneficial.
With the following software and hardware list, you can run all of the code files present in the book (Chapters 1-15).
| Chapter | Software required | OS required |
| --- | --- | --- |
| 1 | Azure Databricks (Chrome, Firefox, Edge, or Safari) | Windows, macOS, or Linux |
We also provide a PDF file that has color images of the screenshots/diagrams used in this book. Click here to download it.
Anirudh Kala is an expert in machine learning techniques, artificial intelligence, and natural language processing. He has helped multiple organizations to run their large-scale data warehouses with quantitative research, natural language generation, data science exploration, and big data implementation. He has worked in every aspect of data analytics using the Azure data platform. Currently, he works as the director of Celebal Technologies, a data science boutique firm dedicated to large-scale analytics. Anirudh holds a computer engineering degree from the University of Rajasthan and his work history features the likes of IBM and ZS Associates.
Anshul Bhatnagar is an experienced, hands-on data architect involved in the architecture, design, and implementation of data platforms and distributed systems. He has worked in the IT industry since 2015 in a range of roles such as Hadoop/Spark developer, data engineer, and data architect. He has also worked in many other sectors, including energy, media, telecoms, and e-commerce. He is currently working for a data and AI boutique company, Celebal Technologies, in India. He is always keen to hear about new ideas and technologies in the areas of big data and AI, so look him up on LinkedIn to ask questions or just to say hi.
Sarthak Sarbahi is a certified data engineer and analyst with a wide technical breadth and a deep understanding of Databricks. His background has led him to a variety of cloud data services with an eye toward data warehousing, big data analytics, robust data engineering, data science, and business intelligence. Sarthak graduated with a degree in mechanical engineering.
If you have already purchased a print or Kindle version of this book, you can get a DRM-free PDF version at no cost.
Simply click on the link to claim your free PDF.