This project uses Apache Spark to analyze a dataset of home sales. It runs a series of Spark SQL queries to explore the data and report the average prices of homes by criteria such as the number of bedrooms, bathrooms, floors, and square footage.
Before running the code, ensure you have the following prerequisites:
- Apache Spark installed
- Python with the PySpark library installed
Install Apache Spark and set up PySpark on your machine.
Clone this repository and run the Spark script using PySpark.
The project reads data from an AWS S3 bucket into a Spark DataFrame.
A temporary view named 'home_sales' is created from the DataFrame.
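A minimal sketch of these two steps, assuming the dataset is a public CSV file on S3 (the bucket URL and filename below are placeholders for the actual dataset location):

```python
from pyspark import SparkFiles
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("HomeSales").getOrCreate()

# Placeholder URL: substitute the actual S3 location of the dataset.
url = "https://example-bucket.s3.amazonaws.com/home_sales_revised.csv"
spark.sparkContext.addFile(url)

# Read the CSV into a DataFrame, inferring column types from the data.
df = spark.read.csv(SparkFiles.get("home_sales_revised.csv"),
                    sep=",", header=True, inferSchema=True)

# Register the DataFrame as a temporary view for Spark SQL queries.
df.createOrReplaceTempView("home_sales")
```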
Several Spark SQL queries are executed to analyze the data (an example query is sketched after this list), including:
- Average price for a four-bedroom house sold in each year.
- Average price of a home with three bedrooms and three bathrooms.
- Average price of a home with specific criteria (bedrooms, bathrooms, floors, and square footage) for each year built.
- Average home price for each "view" rating, considering only ratings where the average price is greater than or equal to $350,000.
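As an illustration, the first query might look like the following; the column names (`date`, `price`, `bedrooms`) are assumptions based on the dataset description:

```python
# Average price of four-bedroom houses, grouped by the year sold.
# Assumes the 'home_sales' view created above and a 'date' column
# holding each sale date.
spark.sql("""
    SELECT YEAR(date) AS year_sold,
           ROUND(AVG(price), 2) AS avg_price
    FROM home_sales
    WHERE bedrooms = 4
    GROUP BY YEAR(date)
    ORDER BY year_sold
""").show()
```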
The 'home_sales' table is cached for improved performance.
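Caching the view, and verifying that it is cached, can be done through Spark SQL and the catalog API:

```python
# Cache the temporary view so subsequent queries read from memory.
spark.sql("CACHE TABLE home_sales")

# Confirm the table is cached (prints True or False).
print(spark.catalog.isCached("home_sales"))
```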
The script writes the home sales data to parquet format, partitioned by the "date_built" field, and then reads the partitioned parquet data back for querying.
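A sketch of that step, using a local output path as a placeholder:

```python
# Write the data as parquet, partitioned on the year each home was built.
df.write.parquet("home_sales_partitioned", mode="overwrite",
                 partitionBy="date_built")

# Read the partitioned parquet data back and expose it as a new view.
parquet_df = spark.read.parquet("home_sales_partitioned")
parquet_df.createOrReplaceTempView("home_sales_parquet")
```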
The 'home_sales' table is uncached when necessary.
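Uncaching mirrors the caching step:

```python
# Release the cached table and verify it is no longer cached.
spark.sql("UNCACHE TABLE home_sales")
print(spark.catalog.isCached("home_sales"))  # expected: False
```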
The dataset was generated by edX Boot Camps LLC and is intended for educational purposes only (University of Toronto).