This is the code repository for Mastering Hadoop 3, published by Packt.
Big data processing at scale to unlock unique business insights
Apache Hadoop is one of the most popular big data solutions for distributed storage and for processing large volumes of data. With Hadoop 3, Apache promises to provide a high-performance, more fault-tolerant, and highly efficient big data processing platform, with a focus on improved scalability.
This book covers the following exciting features:
- Gain an in-depth understanding of distributed computing using Hadoop 3
- Develop enterprise-grade applications using Apache Spark, Flink, and more
- Build scalable and high-performance Hadoop data pipelines with security, monitoring, and data governance
- Explore batch data processing patterns and how to model data in Hadoop
- Master best practices for enterprises using, or planning to use, Hadoop 3 as a data platform
If you feel this book is for you, get your copy today!
All of the code is organized into folders. For example, Chapter02.
The code will look like the following:
```xml
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2,nn3</value>
</property>
```
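For context, a property like this normally sits alongside the nameservice definition in hdfs-site.xml. The following is a minimal sketch of an HDFS HA configuration, reusing the mycluster nameservice and nn1-nn3 NameNode IDs from the snippet above; the master1-master3 host names and port 8020 are placeholder assumptions you would adapt to your own cluster:

```xml
<!-- hdfs-site.xml: minimal HA sketch with three NameNodes (Hadoop 3 supports more than two) -->
<configuration>
  <!-- Logical name for the HDFS nameservice -->
  <property>
    <name>dfs.nameservices</name>
    <value>mycluster</value>
  </property>
  <!-- NameNode IDs within the nameservice -->
  <property>
    <name>dfs.ha.namenodes.mycluster</name>
    <value>nn1,nn2,nn3</value>
  </property>
  <!-- RPC address of each NameNode; master1-master3 are placeholder hosts -->
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn1</name>
    <value>master1:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn2</name>
    <value>master2:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn3</name>
    <value>master3:8020</value>
  </property>
</configuration>
```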
Following is what you need for this book: If you want to become a big data professional by mastering the advanced concepts of Hadoop, this book is for you. You'll also find this book useful if you're a Hadoop professional looking to strengthen your knowledge of the Hadoop ecosystem. Fundamental knowledge of the Java programming language and the basics of Hadoop are necessary to get started with this book.
With the following software and hardware list, you can run all of the code files present in the book.
Chapter | Software required | OS required |
---|---|---|
1-15 | OpenJDK 1.8.0_171 (64-bit), Apache Hadoop 3.1.0, VMware | Ubuntu 16.04.3 LTS |
We also provide a PDF file that has color images of the screenshots/diagrams used in this book. Click here to download it.
Click on the following link to see the Code in Action:
Chanchal Singh has over half a decade of experience in product development and architecture design. He has worked closely with the leadership teams of various companies, including directors, CTOs, and founding members, to define their technical road maps. He is the founder of, and a speaker at, the Big Data and AI Pune meetup group, Experience Speaks. He is a coauthor of Building Data Streaming Applications with Apache Kafka. He has a bachelor's degree in information technology from the University of Mumbai and a master's degree in computer application from Amity University. He was also part of the Entrepreneur Cell at IIT Mumbai. His LinkedIn profile can be found under the username Chanchal Singh.
Manish Kumar is a technical architect at DataMetica Solution Pvt. Ltd. He has approximately 11 years' experience in data management, working as a data architect and product architect. He has extensive experience of building effective ETL pipelines, implementing security over Hadoop, and providing the best possible solutions to data science problems. Before joining the world of big data, he worked as a tech lead for Sears Holdings, India. Manish has a bachelor's degree in information technology, and he is a coauthor of Building Data Streaming Applications with Apache Kafka.
Click here if you have any feedback or suggestions.
If you have already purchased a print or Kindle version of this book, you can get a DRM-free PDF version at no cost.
Simply click on the link to claim your free PDF.