In this lab, we'll install the popular XGBoost library and explore how to use this boosting model to classify wines by quality, using the Wine Quality Dataset from the UCI Machine Learning Repository. First we'll install XGBoost on our machines, and then we'll use it to make some classifications on the Wine Quality Dataset!
You will be able to:
- Understand the general difference between XGBoost and other ensemble algorithms such as AdaBoost
- Install and use XGBoost
The XGBoost library is not currently included in scikit-learn, so we'll have to install it on our own using conda. To install XGBoost, follow these steps:
- Open up a new terminal window.
- Activate your conda environment.
- Run `conda install py-xgboost`. You must use conda to install this package; currently, it cannot be installed using `pip`.
- Once the installation has completed, run the cell below to verify that everything worked.
import xgboost as xgb
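If the import succeeds, the installation worked. As an optional sanity check (not part of the lab's starter code), you can also print the installed version:

# Optional: confirm which version of XGBoost was installed
print(xgb.__version__)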
Run the cell below to import everything we'll need for this lab.
import pandas as pd
import numpy as np
np.random.seed(0)
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score
from sklearn.model_selection import GridSearchCV
%matplotlib inline
The dataset we'll be using for this lab is stored in the file `winequality-red.csv`.
In the cell below, use pandas to import the dataset into a dataframe, and inspect the head of the dataframe to ensure everything loaded correctly.
df = None
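If you get stuck, here's a minimal sketch of one way to complete that cell. It assumes `winequality-red.csv` sits in the same directory as this notebook; depending on the version of the file, you may need to pass `sep=';'` to `read_csv()`.

# Sketch: load the CSV from the notebook's working directory and peek at the first rows
df = pd.read_csv('winequality-red.csv')
df.head()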
For this lab, our target variable will be `quality`. That makes this a multiclass classification problem. Given the data in the columns from `fixed_acidity` through `alcohol`, we'll predict the `quality` of the wine.
This means that we need to store our target variable separately from the dataset, and then split the data and labels into training and testing sets that we can use to train and validate the model.
In the cell below:

- Store the `quality` column in the `labels` variable and then remove the column from our dataset
- Create a `StandardScaler` object and scale the data using its `fit_transform()` method
- Split the data into training and testing sets using the appropriate method from scikit-learn
labels = None
labels_removed_df = None
scaler = None
scaled_df = None
X_train, X_test, y_train, y_test = None
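Here's a sketch of one possible solution for the cell above. The column name `'quality'` comes from the dataset, and `random_state=0` is just an illustrative choice for reproducibility.

# Sketch: separate the target, scale the features, and create train/test splits
labels = df['quality']
labels_removed_df = df.drop('quality', axis=1)

scaler = StandardScaler()
scaled_df = scaler.fit_transform(labels_removed_df)

X_train, X_test, y_train, y_test = train_test_split(scaled_df, labels, random_state=0)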
Now that we have prepared our data for modeling, we can use XGBoost to build a model that can accurately classify wine quality based on the features of the wine!
The API for xgboost is purposefully written to mirror the structure of other scikit-learn models, so we can fit and predict with it in the same way.
clf = None
clf.fit(None, None)
training_preds = None
val_preds = None
training_accuracy = None
val_accuracy = None
print("Training Accuracy: {:.4}%".format(training_accuracy * 100))
print("Validation accuracy: {:.4}%".format(val_accuracy * 100))
Our model had somewhat lackluster performance on the testing set compared to the training set, suggesting that it is beginning to overfit the training data. Let's tune the model to improve performance and reduce overfitting.
For a full list of model parameters, see the XGBoost Documentation.
Many of the parameters we'll be tuning are parameters we've already encountered when working with Decision Trees, Random Forests, and Gradient Boosted Trees.
Examine the tunable parameters for XGBoost, and then fill in appropriate values for the `param_grid` dictionary in the cell below. Put the values you want to test for each parameter inside the corresponding lists in `param_grid`.
NOTE: Remember, `GridSearchCV` finds the optimal combination of parameters through an exhaustive combinatoric search. If you search over too many parameters, the grid search will take forever to run! For the sake of time, we recommend trying no more than 3 values per parameter for the following steps.
param_grid = {
"learning_rate": None,
'max_depth': None,
'min_child_weight': None,
'subsample': None,
'n_estimators': None,
}
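If you're not sure where to start, here is one illustrative set of values. These are example choices, not the "correct" answer; pick your own, but keep each list to about 3 values or fewer so the search stays fast.

# Example values only -- small lists keep the grid search manageable
param_grid = {
    'learning_rate': [0.1, 0.2],
    'max_depth': [6],
    'min_child_weight': [1, 2],
    'subsample': [0.5, 0.7],
    'n_estimators': [100],
}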
Now that we have constructed our `param_grid` dictionary, we can use `GridSearchCV` to tune our XGBoost model.

In the cell below:

- Create a `GridSearchCV` object. Pass in the following parameters:
  - `clf`, our classifier
  - `param_grid`, the dictionary of parameters we're going to grid search through
  - `scoring='accuracy'`
  - `cv=None`
  - `n_jobs=1`
- Fit our `grid_clf` object, passing in `X_train` and `y_train`
- Store the best parameter combination found by the grid search in `best_parameters`. You can find these inside the grid search object's `.best_params_` attribute.
- Use `grid_clf` to create predictions for the training and testing sets, and store them in separate variables.
- Compute the accuracy score for the training and testing predictions.
grid_clf = None
grid_clf.fit(None, None)
best_parameters = None
print("Grid Search found the following optimal parameters: ")
for param_name in sorted(best_parameters.keys()):
print("%s: %r" % (param_name, best_parameters[param_name]))
training_preds = None
val_preds = None
training_accuracy = None
val_accuracy = None
print("")
print("Training Accuracy: {:.4}%".format(training_accuracy * 100))
print("Validation accuracy: {:.4}%".format(val_accuracy * 100))
That's a big improvement! You should see that your validation accuracy has increased by 10-15%, and that the model shows far fewer signs of overfitting.
Great! We've now successfully made use of one of the most powerful boosting models in data science. We've also learned how to tune it for better performance using the grid search methodology we learned previously. XGBoost is a powerful tool to have in your arsenal, so don't be afraid to experiment with it when modeling.