Regression Model Validation - Lab

Introduction

In this lab, you'll validate your Ames Housing data model using a train-test split.

Objectives

You will be able to:

  • Compare training and test errors to determine whether the model is overfitting or underfitting

Let's use our Ames Housing Data again!

We've included the preprocessing code below.

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

ames = pd.read_csv('ames.csv')

# using 9 predictive categorical or continuous features, plus the target SalePrice
continuous = ['LotArea', '1stFlrSF', 'GrLivArea', 'SalePrice']
categoricals = ['BldgType', 'KitchenQual', 'SaleType', 'MSZoning', 'Street', 'Neighborhood']

ames_cont = ames[continuous]

# log features
log_names = [f'{column}_log' for column in ames_cont.columns]

ames_log = np.log(ames_cont)
ames_log.columns = log_names

# normalize (subtract mean and divide by std)

def normalize(feature):
    return (feature - feature.mean()) / feature.std()

ames_log_norm = ames_log.apply(normalize)

# one hot encode categoricals
ames_ohe = pd.get_dummies(ames[categoricals], prefix=categoricals, drop_first=True)

preprocessed = pd.concat([ames_log_norm, ames_ohe], axis=1)
X = preprocessed.drop('SalePrice_log', axis=1)
y = preprocessed['SalePrice_log']

Perform a train-test split

# Split the data into training and test sets. Use the default split size
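
One possible sketch, assuming scikit-learn's train_test_split and the X and y defined above (the random_state value is an arbitrary choice for reproducibility):

from sklearn.model_selection import train_test_split

# Default split: roughly 25% of the rows go to the test set
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
print(len(X_train), len(X_test))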

Apply your model to the train set

# Import and initialize the linear regression model class
# Fit the model to train data
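
A minimal sketch assuming scikit-learn's LinearRegression and the train/test variables from the split above (the name linreg is just a placeholder):

from sklearn.linear_model import LinearRegression

# Instantiate an ordinary least squares model and fit it to the training data
linreg = LinearRegression()
linreg.fit(X_train, y_train)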

Calculate predictions on training and test sets

# Calculate predictions on training and test sets
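
Continuing the same sketch with the fitted linreg model:

# Generate predictions for both the training and the test sets
y_hat_train = linreg.predict(X_train)
y_hat_test = linreg.predict(X_test)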

Calculate training and test residuals

# Calculate residuals
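
Since y_train and y_test are pandas Series, the residuals can be computed by simple subtraction (variable names are placeholders):

# Residuals are the differences between actual and predicted values
train_residuals = y_train - y_hat_train
test_residuals = y_test - y_hat_test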

Calculate the Mean Squared Error (MSE)

A good way to compare overall performance is to compare the mean squared error (MSE) for the predicted values on the training and test sets.

# Import mean_squared_error from sklearn.metrics
# Calculate training and test MSE

If your test error is substantially worse than the train error, this is a sign that the model doesn't generalize well to future cases.

One simple way to demonstrate overfitting and underfitting is to alter the size of our train-test split. By default, scikit-learn allocates 25% of the data to the test set and 75% to the training set. Fitting a model on only 10% of the data is apt to lead to underfitting, while training a model on 99% of the data is apt to lead to overfitting.

Evaluate the effect of train-test split size

Iterate over a range of train-test split sizes from .5 to .95. For each of these, generate a new train/test split, fit a model to the training sample, and calculate both the training error and the test error (MSE). Plot the two curves (train error vs. training size and test error vs. training size) on the same graph.

# Your code here
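
One way this loop could look. It interprets the split sizes as training proportions and passes each to train_test_split via train_size (the fixed random_state is an arbitrary choice; variable names are placeholders):

train_sizes = np.arange(0.5, 1.0, 0.05)  # training proportions 0.50, 0.55, ..., 0.95
train_errors = []
test_errors = []

for size in train_sizes:
    X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=size, random_state=42)
    linreg = LinearRegression().fit(X_train, y_train)
    train_errors.append(mean_squared_error(y_train, linreg.predict(X_train)))
    test_errors.append(mean_squared_error(y_test, linreg.predict(X_test)))

plt.scatter(train_sizes, train_errors, label='Training error')
plt.scatter(train_sizes, test_errors, label='Test error')
plt.xlabel('Proportion of data used for training')
plt.ylabel('MSE')
plt.legend()
plt.show()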

Evaluate the effect of train-test split size: Extension

Repeat the previous example, but for each train-test split size, generate 10 iterations of models/errors and save the average train/test error. This will help account for any particularly good/bad models that might have resulted from poor/good splits in the data.

# Your code here
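
The same structure extended with an inner loop; leaving random_state unset means each of the 10 iterations draws a different split (again a sketch under the same assumptions):

train_errors_avg = []
test_errors_avg = []

for size in train_sizes:
    inner_train = []
    inner_test = []
    for i in range(10):
        # No fixed random_state, so every iteration uses a different random split
        X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=size)
        linreg = LinearRegression().fit(X_train, y_train)
        inner_train.append(mean_squared_error(y_train, linreg.predict(X_train)))
        inner_test.append(mean_squared_error(y_test, linreg.predict(X_test)))
    train_errors_avg.append(np.mean(inner_train))
    test_errors_avg.append(np.mean(inner_test))

plt.scatter(train_sizes, train_errors_avg, label='Average training error')
plt.scatter(train_sizes, test_errors_avg, label='Average test error')
plt.xlabel('Proportion of data used for training')
plt.ylabel('MSE')
plt.legend()
plt.show()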

What's happening here? Evaluate your result!

Summary

Congratulations! You've now practiced calculating MSE and used your train-test split skills to validate your model.