Extensions to Linear Models - Lab
Introduction
In this lab, you'll practice many concepts you have learned so far, from adding interactions and polynomials to your model to AIC and BIC!
Objectives
You will be able to:
- Build a linear regression model with interactions and polynomial features
- Use AIC and BIC to select the best value for the regularization parameter
Let's get started!
Import all the necessary packages.
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings('ignore')
from itertools import combinations
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import KFold
from sklearn.preprocessing import scale
from sklearn.preprocessing import PolynomialFeatures
Load the data.
df = pd.read_csv("ames.csv")
df = df[['LotArea', 'OverallQual', 'OverallCond', 'TotalBsmtSF',
'1stFlrSF', '2ndFlrSF', 'GrLivArea', 'TotRmsAbvGrd',
'GarageArea', 'Fireplaces', 'SalePrice']]
Look at a baseline housing data model
Above, we imported the Ames housing data and grabbed a subset of the data to use in this analysis.
Next steps:
- Split the data into target (y) and predictors (X) -- ensure both are DataFrames
- Scale all the predictors using scale. Convert these scaled features into a DataFrame
- Build a baseline model using the scaled variables as predictors. Use 5-fold cross-validation (set random_state to 1) and use the $R^2$ score to evaluate the model
# Your code here
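One possible baseline sketch (the names X_scaled, crossvalidation, and baseline below are illustrative, not required):
y = df[['SalePrice']]
X = df.drop(columns='SalePrice')

# Scale the predictors and keep them in a DataFrame
X_scaled = pd.DataFrame(scale(X), columns=X.columns)

# Baseline: 5-fold cross-validated R^2 for a plain linear regression
crossvalidation = KFold(n_splits=5, shuffle=True, random_state=1)
baseline = np.mean(cross_val_score(LinearRegression(), X_scaled, y,
                                   scoring='r2', cv=crossvalidation))
print(baseline)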
Include interactions
Look at all the possible combinations of variables for interactions by adding interactions one by one to the baseline model. Next, evaluate that model using 5-fold cross-validation and store the $R^2$ so you can compare each interaction against the baseline.
Print the 7 most important interactions.
# Your code here
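A sketch of one approach, reusing the X_scaled, y, and crossvalidation objects assumed from the baseline step:
# Try each pairwise interaction on top of the baseline predictors
interactions = []
for var1, var2 in combinations(X_scaled.columns, 2):
    X_int = X_scaled.copy()
    X_int['interaction'] = X_int[var1] * X_int[var2]
    score = np.mean(cross_val_score(LinearRegression(), X_int, y,
                                    scoring='r2', cv=crossvalidation))
    interactions.append((var1, var2, round(score, 3)))

# Sort by R^2 and keep the 7 interactions that help the model most
top_7 = sorted(interactions, key=lambda inter: inter[2], reverse=True)[:7]
print(top_7)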
Write code to include the 7 most important interactions in your data set by adding 7 columns. Name the columns "var1_var2", where var1 and var2 are the two variables in the interaction.
# Your code here
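For example, assuming top_7 holds the (var1, var2, R2) tuples from the previous step:
# Add the 7 strongest interactions as new columns named "var1_var2"
df_inter = X_scaled.copy()
for var1, var2, _ in top_7:
    df_inter[f'{var1}_{var2}'] = df_inter[var1] * df_inter[var2]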
Include polynomials
Try polynomials of degrees 2, 3, and 4 for each variable, in a similar way you did for interactions: add the polynomial terms for one variable at a time to the baseline model and evaluate with 5-fold cross-validation. Store the results as tuples (var_name, degree, R2), so e.g. ('OverallQual', 2, 0.781)
# Your code here
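A possible sketch; the generated polynomial column names are illustrative:
# Test degree 2, 3, and 4 polynomials for each variable separately
polynomials = []
for col in X_scaled.columns:
    for degree in [2, 3, 4]:
        poly = PolynomialFeatures(degree, include_bias=False)
        expanded = poly.fit_transform(X_scaled[[col]])
        poly_cols = [f'{col}^{d}' for d in range(1, degree + 1)]
        X_poly = pd.concat([X_scaled.drop(columns=col),
                            pd.DataFrame(expanded, columns=poly_cols,
                                         index=X_scaled.index)], axis=1)
        score = np.mean(cross_val_score(LinearRegression(), X_poly, y,
                                        scoring='r2', cv=crossvalidation))
        polynomials.append((col, degree, round(score, 3)))
print(polynomials)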
For each variable, print out the maximum $R^2$ you obtained (and the degree that produced it).
# Your code here
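For example, using pandas on the stored tuples:
# For each variable, report the degree with the highest cross-validated R^2
poly_df = pd.DataFrame(polynomials, columns=['variable', 'degree', 'R2'])
print(poly_df.loc[poly_df.groupby('variable')['R2'].idxmax()])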
Which two variables seem to benefit most from adding polynomial terms?
Add polynomials for the two features that seem to benefit the most, as in have the best $R^2$ improvement over the baseline. Add these polynomial terms to df_inter so the final data set has both interactions and polynomials in the model.
# Your code here
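A sketch of one approach; the two variables and degrees below are placeholders -- substitute whichever came out best in your own results:
# Replace each winning variable in df_inter with its polynomial terms
for col, degree in [('OverallQual', 4), ('GrLivArea', 4)]:  # placeholders
    poly = PolynomialFeatures(degree, include_bias=False)
    expanded = poly.fit_transform(df_inter[[col]])
    poly_cols = [f'{col}^{d}' for d in range(1, degree + 1)]
    df_inter = pd.concat([df_inter.drop(columns=col),
                          pd.DataFrame(expanded, columns=poly_cols,
                                       index=df_inter.index)], axis=1)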
Check out your final data set and make sure that your interaction terms as well as your polynomial terms are included.
# Your code here
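For example:
# Sanity check: interaction and polynomial columns should both appear
print(df_inter.columns)
df_inter.head()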
Full model R-squared
Check out the $R^2$ of the full model, with all interaction and polynomial terms included.
# Your code here
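A sketch, assuming df_inter now contains both the interaction and polynomial columns:
# Cross-validated R^2 of the full model
full_model = np.mean(cross_val_score(LinearRegression(), df_inter, y,
                                     scoring='r2', cv=crossvalidation))
print(full_model)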
Find the best Lasso regularization parameter
You learned that when using Lasso regularization, your coefficients shrink to 0 when using a higher regularization parameter. Now the question is which value we should choose for the regularization parameter.
This is where the AIC and BIC come in handy! We'll use both criteria in what follows and perform cross-validation to select an optimal value of the regularization parameter.
Read the page here: https://scikit-learn.org/stable/auto_examples/linear_model/plot_lasso_model_selection.html and create a plot similar to the first one shown on that page.
from sklearn.linear_model import Lasso, LassoCV, LassoLarsCV, LassoLarsIC
# Your code here
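A sketch of the AIC/BIC plot, assuming df_inter and y from the earlier steps (the names aic_model and bic_model are illustrative):
# Fit LassoLarsIC under both criteria; .alphas_ and .criterion_ hold the path
aic_model = LassoLarsIC(criterion='aic').fit(df_inter, np.ravel(y))
bic_model = LassoLarsIC(criterion='bic').fit(df_inter, np.ravel(y))

plt.figure(figsize=(8, 5))
plt.plot(aic_model.alphas_, aic_model.criterion_, '--', label='AIC criterion')
plt.plot(bic_model.alphas_, bic_model.criterion_, '--', label='BIC criterion')
plt.axvline(aic_model.alpha_, color='tab:blue', label='alpha: AIC estimate')
plt.axvline(bic_model.alpha_, color='tab:orange', label='alpha: BIC estimate')
plt.xscale('log')  # the candidate alphas span several orders of magnitude
plt.xlabel('alpha')
plt.ylabel('information criterion')
plt.legend()
plt.title('Information criterion for model selection')
plt.show()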
Analyze the final result
Finally, use the best value for the regularization parameter according to AIC and BIC, and compare the R-squared and RMSE of the resulting Lasso models with the baseline model on a train-test split.
Remember, you can find the Root Mean Squared Error (RMSE) by setting squared=False
inside the mean_squared_error() function (see the documentation). The RMSE is in the same units as our target, so we can see how far off our predicted sale prices are in dollars.
from sklearn.metrics import mean_squared_error, mean_squared_log_error
from sklearn.model_selection import train_test_split
# Split X_scaled and y into training and test sets
# Set random_state to 1
X_train, X_test, y_train, y_test = None
# Code for baseline model
linreg_all = None
# Print R-Squared and RMSE
# Split df_inter and y into training and test sets
# Set random_state to 1
X_train, X_test, y_train, y_test = None
# Code for lasso with alpha from AIC
lasso = None
# Print R-Squared and RMSE
# Code for lasso with alpha from BIC
lasso = None
# Print R-Squared and RMSE
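A sketch of how the cell above could be filled in, assuming the aic_model and bic_model fits from the previous section:
# Baseline model on the scaled predictors only
X_train, X_test, y_train, y_test = train_test_split(X_scaled, y,
                                                    random_state=1)
linreg_all = LinearRegression().fit(X_train, y_train)
print('Baseline R^2 :', linreg_all.score(X_test, y_test))
print('Baseline RMSE:', mean_squared_error(y_test, linreg_all.predict(X_test),
                                           squared=False))

# Lasso on the full data set, alpha chosen by AIC
X_train, X_test, y_train, y_test = train_test_split(df_inter, y,
                                                    random_state=1)
lasso = Lasso(alpha=aic_model.alpha_).fit(X_train, y_train)
print('Lasso (AIC) R^2 :', lasso.score(X_test, y_test))
print('Lasso (AIC) RMSE:', mean_squared_error(y_test, lasso.predict(X_test),
                                              squared=False))

# Lasso on the full data set, alpha chosen by BIC
lasso = Lasso(alpha=bic_model.alpha_).fit(X_train, y_train)
print('Lasso (BIC) R^2 :', lasso.score(X_test, y_test))
print('Lasso (BIC) RMSE:', mean_squared_error(y_test, lasso.predict(X_test),
                                              squared=False))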
Level up (Optional)
Create a Lasso path
From this section, you know that when using Lasso, more coefficients shrink to zero as your regularization parameter goes up. In Scikit-learn there is a function lasso_path()
which computes the coefficients across a range of $\alpha$ values, so you can visualize the shrinkage of the coefficients as $\alpha$ changes. Try this out yourself!
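A minimal sketch of how lasso_path() could be used here, assuming df_inter and y from earlier:
from sklearn.linear_model import lasso_path

# Compute the coefficients over a descending grid of alpha values
alphas, coefs, _ = lasso_path(df_inter, np.ravel(y))

# One line per feature: watch each coefficient shrink toward 0
plt.figure(figsize=(8, 5))
for coef in coefs:
    plt.plot(alphas, coef)
plt.xscale('log')
plt.xlabel('alpha')
plt.ylabel('coefficient value')
plt.title('Lasso paths')
plt.show()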
AIC and BIC for subset selection
This notebook shows how you can use AIC and BIC purely for feature selection. Try this code out on our Ames housing data!
https://xavierbourretsicotte.github.io/subset_selection.html
Summary
Congratulations! You now know how to create better linear models and how to use AIC and BIC both for feature selection and for optimizing your regularization parameter when performing Ridge and Lasso regression.