Kaggle Competition
In this notebook, we're going to go through an example machine learning project with the goal of predicting the sale price of bulldozers.
Since we're trying to predict a number, this kind of problem is known as a regression problem.
The data and evaluation metric we'll be using (root mean squared log error, or RMSLE) come from the Kaggle Bluebook for Bulldozers competition.
The techniques used here have been inspired by and adapted from the fast.ai machine learning course.
What we'll end up with
To work through these topics, we'll use pandas, Matplotlib and NumPy for data analysis, as well as Scikit-Learn for machine learning and modelling tasks. These tools can be used at each step of the machine learning modelling process.
We'll work through each step and by the end of the notebook, we'll have a trained machine learning model which predicts the sale price of a bulldozer given different characteristics about it.
For this dataset, the problem we're trying to solve, or better, the question we're trying to answer, is:
How well can we predict the future sale price of a bulldozer, given its characteristics and previous examples of how much similar bulldozers have sold for?
Looking at the dataset from Kaggle, you can see it's a time series problem. This means the dataset has a time attribute.
In this case, it's historical sales data of bulldozers, including things like model type, size, sale date and more.
The data is available from the Kaggle Bluebook for Bulldozers competition page.
There are 3 datasets:
Train.csv - Historical bulldozer sales examples up to 2011 (close to 400,000 examples with 50+ different attributes, including SalePrice, the target variable).
Valid.csv - Historical bulldozer sales examples from January 1 2012 to April 30 2012 (close to 12,000 examples with the same attributes as Train.csv).
Test.csv - Historical bulldozer sales examples from May 1 2012 to November 2012 (close to 12,000 examples, but missing the SalePrice attribute, as this is what we'll be trying to predict).
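Since we'll be working with these files throughout, here's a minimal sketch of loading them with pandas. The "data/" folder path and the "saledate" column name are assumptions; adjust them to match wherever you've saved the downloaded CSVs.

```python
import pandas as pd

# A minimal loading sketch (paths and the "saledate" column name are assumptions).
# low_memory=False avoids mixed-type warnings on the larger files, and
# parse_dates turns the sale date column into proper datetime objects.
df_train = pd.read_csv("data/Train.csv", low_memory=False, parse_dates=["saledate"])
df_valid = pd.read_csv("data/Valid.csv", low_memory=False, parse_dates=["saledate"])
df_test = pd.read_csv("data/Test.csv", low_memory=False, parse_dates=["saledate"])

print(df_train.shape, df_valid.shape, df_test.shape)
```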
For this problem, Kaggle has set the evaluation metric to root mean squared log error (RMSLE). As with many regression evaluations, the goal is to get this value as low as possible.
To see how well our model is doing, we'll calculate the RMSLE and then compare our results to others on the Kaggle leaderboard.
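As a reference, RMSLE is the square root of the mean squared difference between the log of the predictions and the log of the actual values (both offset by 1). A minimal sketch using Scikit-Learn's mean_squared_log_error:

```python
import numpy as np
from sklearn.metrics import mean_squared_log_error

def rmsle(y_true, y_pred):
    """Root mean squared log error: sqrt(mean((log(1 + pred) - log(1 + true))^2))."""
    return np.sqrt(mean_squared_log_error(y_true, y_pred))

# Sanity check: perfect predictions give an RMSLE of 0.0
print(rmsle(np.array([100_000, 150_000]), np.array([100_000, 150_000])))
```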
Features are the different attributes (columns) of the data. During this step, you'll want to start finding out what you can about them.
One of the most common ways to do this, is to create a data dictionary.
For this dataset, Kaggle provide a data dictionary which contains information about what each attribute of the dataset means. You can download this file directly from the Kaggle competition page (account required) or view it on Google Sheets.
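Alongside the data dictionary, pandas can give a quick overview of the features once the data is loaded. A small sketch, assuming the df_train DataFrame from the loading example above:

```python
# Quick structural overview of the training data (assumes df_train from above).
df_train.info()  # column names, dtypes, non-null counts

# Columns with the most missing values
print(df_train.isna().sum().sort_values(ascending=False).head(20))

# Distribution of the target variable
print(df_train["SalePrice"].describe())
```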
With all of this being known, let's get started!
First, we'll import the dataset and start exploring. Since we know the evaluation metric we're trying to minimise, our first goal will be building a baseline model and seeing how it stacks up against the competition.
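As a rough illustration of what such a baseline might look like (not the final approach), here's a sketch that keeps only the numeric columns, fills missing values with the median, and fits a random forest. It assumes df_train and df_valid from the loading example, the rmsle function defined earlier, and that Valid.csv includes SalePrice as described above.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def make_numeric(df):
    """Keep only numeric columns (dropping the target if present) and median-fill missing values."""
    numeric = df.select_dtypes(include=[np.number]).drop(columns=["SalePrice"], errors="ignore")
    return numeric.fillna(numeric.median())

X_train, y_train = make_numeric(df_train), df_train["SalePrice"]
X_valid, y_valid = make_numeric(df_valid)[X_train.columns], df_valid["SalePrice"]

# Fit a default random forest as a first benchmark to beat
model = RandomForestRegressor(n_jobs=-1, random_state=42)
model.fit(X_train, y_train)
print("Baseline validation RMSLE:", rmsle(y_valid, model.predict(X_valid)))
```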