In this lab, you'll validate your Ames Housing data model using a train-test split.
You will be able to:
- Perform a train-test split
- Prepare training and testing data for modeling
- Compare training and testing errors to determine whether a model is overfitting or underfitting
We included the code to load the data below.
# Run this cell without changes
import pandas as pd
import numpy as np
ames = pd.read_csv('ames.csv', index_col=0)
ames
Use `train_test_split` (documentation here) with the default split size. At the end you should have `X_train`, `X_test`, `y_train`, and `y_test` variables, where `y` represents `SalePrice` and `X` represents all other columns.
# Your code here: split the data into training and test sets
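A minimal sketch of one possible solution is shown below (the `random_state` value is an arbitrary choice for reproducibility, not a requirement):
# A possible solution sketch
from sklearn.model_selection import train_test_split
# y is SalePrice; X is every other column
y = ames['SalePrice']
X = ames.drop('SalePrice', axis=1)
# Default split size: 75% of rows go to the training set, 25% to the test set
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)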
This code is completed for you and should work as long as the correct variables were created.
# Run this cell without changes
from sklearn.preprocessing import FunctionTransformer, OneHotEncoder
continuous = ['LotArea', '1stFlrSF', 'GrLivArea']
categoricals = ['BldgType', 'KitchenQual', 'Street']
# Instantiate transformers
log_transformer = FunctionTransformer(np.log, validate=True)
ohe = OneHotEncoder(drop='first', sparse=False)  # note: on scikit-learn >= 1.2, use sparse_output=False instead of sparse=False
# Fit transformers
log_transformer.fit(X_train[continuous])
ohe.fit(X_train[categoricals])
# Transform training data
X_train = pd.concat([
pd.DataFrame(log_transformer.transform(X_train[continuous]), index=X_train.index),
pd.DataFrame(ohe.transform(X_train[categoricals]), index=X_train.index)
], axis=1)
# Transform test data
X_test = pd.concat([
pd.DataFrame(log_transformer.transform(X_test[continuous]), index=X_test.index),
pd.DataFrame(ohe.transform(X_test[categoricals]), index=X_test.index)
], axis=1)
# Your code here: import the linear regression model class, initialize a model
# Your code here: fit the model to train data
# Your code here: generate predictions for both sets
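A minimal sketch covering all three steps, assuming an ordinary least-squares model from scikit-learn (the prediction variable names are arbitrary choices):
# A possible solution sketch
from sklearn.linear_model import LinearRegression
# Initialize and fit the model on the (preprocessed) training data
model = LinearRegression()
model.fit(X_train, y_train)
# Generate predictions for both the training and test sets
y_hat_train = model.predict(X_train)
y_hat_test = model.predict(X_test)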
You can use `mean_squared_error` from scikit-learn (documentation here).
# Your code here: calculate training and test MSE
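For example (assuming predictions named `y_hat_train` and `y_hat_test` as in the sketch above):
# A possible solution sketch
from sklearn.metrics import mean_squared_error
train_mse = mean_squared_error(y_train, y_hat_train)
test_mse = mean_squared_error(y_test, y_hat_test)
print('Train MSE:', train_mse)
print('Test MSE: ', test_mse)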
If your test error is substantially worse than your training error, this is a sign that the model doesn't generalize well to unseen cases.
One simple way to demonstrate overfitting and underfitting is to alter the size of our train-test split. By default, scikit-learn allocates 25% of the data to the test set and 75% to the training set. Fitting a model on only 10% of the data is apt to lead to underfitting, while training a model on 99% of the data is apt to lead to overfitting.
Iterate over a range of train-test split sizes from .5 to .9. For each split size, generate a new train-test split, preprocess both sets of data, fit a model to the training sample, and calculate both the training error and the test error (MSE). Plot the two curves (train error vs. training size and test error vs. training size) on a single graph.
# Your code here
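One possible sketch is shown below. It wraps the split/preprocess/fit/score steps in a helper function (the name `split_fit_score` is an arbitrary choice), interprets the split sizes as test-set proportions, and reuses `ames`, `continuous`, `categoricals`, `FunctionTransformer`, and `OneHotEncoder` from the cells above. With very small training sets, a category that appears only in the test set will make `OneHotEncoder.transform` raise an error, so you may need to adjust the encoder settings or the random seed.
# A possible solution sketch
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

def split_fit_score(test_size, random_state=None):
    """Split, preprocess, fit a linear model, and return (train MSE, test MSE)."""
    y = ames['SalePrice']
    X = ames.drop('SalePrice', axis=1)
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=test_size, random_state=random_state)

    # Same preprocessing as above: log-transform continuous features and
    # one-hot encode categoricals, fitting the transformers on training data only
    log_transformer = FunctionTransformer(np.log, validate=True)
    ohe = OneHotEncoder(drop='first', sparse=False)
    log_transformer.fit(X_tr[continuous])
    ohe.fit(X_tr[categoricals])

    def preprocess(df):
        return pd.concat([
            pd.DataFrame(log_transformer.transform(df[continuous]), index=df.index),
            pd.DataFrame(ohe.transform(df[categoricals]), index=df.index)
        ], axis=1)

    X_tr, X_te = preprocess(X_tr), preprocess(X_te)

    model = LinearRegression()
    model.fit(X_tr, y_tr)
    train_mse = mean_squared_error(y_tr, model.predict(X_tr))
    test_mse = mean_squared_error(y_te, model.predict(X_te))
    return train_mse, test_mse

test_sizes = np.linspace(0.5, 0.9, 9)
train_errors = []
test_errors = []
for t in test_sizes:
    train_mse, test_mse = split_fit_score(t, random_state=42)
    train_errors.append(train_mse)
    test_errors.append(test_mse)

plt.scatter(test_sizes, train_errors, label='Training error')
plt.scatter(test_sizes, test_errors, label='Test error')
plt.xlabel('Proportion of data in the test set')
plt.ylabel('MSE')
plt.legend()
plt.show()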
Repeat the previous example, but for each train-test split size, generate 10 models/error measurements and save the average train and test error. This helps account for particularly good or bad models that might result from lucky or unlucky splits of the data.
# Your code here
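A sketch that reuses the `split_fit_score` helper from the previous sketch, averaging over 10 differently-seeded splits per size:
# A possible solution sketch (reuses split_fit_score from above)
test_sizes = np.linspace(0.5, 0.9, 9)
avg_train_errors = []
avg_test_errors = []
for t in test_sizes:
    train_runs = []
    test_runs = []
    for i in range(10):
        # A different random_state each iteration gives a different split
        train_mse, test_mse = split_fit_score(t, random_state=i)
        train_runs.append(train_mse)
        test_runs.append(test_mse)
    avg_train_errors.append(np.mean(train_runs))
    avg_test_errors.append(np.mean(test_runs))

plt.scatter(test_sizes, avg_train_errors, label='Average training error')
plt.scatter(test_sizes, avg_test_errors, label='Average test error')
plt.xlabel('Proportion of data in the test set')
plt.ylabel('MSE')
plt.legend()
plt.show()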
What's happening here? Evaluate your result!
Congratulations! You have now practiced your knowledge of MSE and used your train-test split skills to validate your model.