Hyperparameter tuning for scikit-learn models, using evolutionary algorithms.
This is meant to be an alternative to popular methods inside scikit-learn, such as Grid Search and Randomized Grid Search.
Sklearn-genetic-opt uses evolutionary algorithms from the DEAP package to choose the set of hyperparameters that optimizes (maximizes or minimizes) the cross-validation score. It can be used for both regression and classification problems.
Documentation is available at https://sklearn-genetic-opt.readthedocs.io/en/stable/
Figure: Sampled distribution of hyperparameters.
Figure: Optimization progress in a regression problem.
- GASearchCV: Main class of the package; it holds the evolutionary cross-validation optimization routine.
- Algorithms: Set of different evolutionary algorithms to use as the optimization procedure (see the sketch after this list).
- Callbacks: Custom evaluation strategies to generate early stopping rules, logging, or custom logic.
- Plots: Generate pre-defined plots to understand the optimization process.
- MLflow: Built-in integration with MLflow to log all the hyperparameters, cv-scores and the fitted models.
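As a minimal sketch of the Algorithms and Callbacks features (the class and argument names below are taken from the package documentation and are assumptions that may differ between versions):

from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import load_digits
from sklearn_genetic import GASearchCV
from sklearn_genetic.space import Integer, Continuous
from sklearn_genetic.callbacks import ConsecutiveStopping, DeltaThreshold

X, y = load_digits(return_X_y=True)

# Search space defined as ranges and distributions instead of fixed grid values
param_grid = {'max_depth': Integer(2, 20),
              'min_weight_fraction_leaf': Continuous(0.01, 0.5)}

search = GASearchCV(estimator=DecisionTreeClassifier(),
                    param_grid=param_grid,
                    cv=3,
                    scoring='accuracy',
                    population_size=10,
                    generations=30,
                    algorithm='eaMuPlusLambda')  # assumed name of one of the DEAP algorithms exposed by the package

# Callbacks are evaluated at the end of each generation to decide whether to stop early
callbacks = [ConsecutiveStopping(generations=5, metric='fitness'),  # stop after 5 generations without improvement
             DeltaThreshold(threshold=0.001, metric='fitness')]     # stop when the fitness change is below the threshold

search.fit(X, y, callbacks=callbacks)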
Install sklearn-genetic-opt
It's advised to install sklearn-genetic-opt inside a virtual environment; inside the env, use:
pip install sklearn-genetic-opt
from sklearn_genetic import GASearchCV
from sklearn_genetic.space import Continuous, Categorical, Integer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split, StratifiedKFold
from sklearn.datasets import load_digits
from sklearn.metrics import accuracy_score
import matplotlib.pyplot as plt
data = load_digits()
n_samples = len(data.images)
X = data.images.reshape((n_samples, -1))
y = data['target']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
clf = RandomForestClassifier()
param_grid = {'min_weight_fraction_leaf': Continuous(0.01, 0.5, distribution='log-uniform'),
              'bootstrap': Categorical([True, False]),
              'max_depth': Integer(2, 30),
              'max_leaf_nodes': Integer(2, 35),
              'n_estimators': Integer(100, 300)}
cv = StratifiedKFold(n_splits=3, shuffle=True)
evolved_estimator = GASearchCV(estimator=clf,
                               cv=cv,
                               scoring='accuracy',
                               population_size=10,
                               generations=35,
                               param_grid=param_grid,
                               n_jobs=-1,
                               verbose=True,
                               keep_top_k=4)
# Train and optimize the estimator
evolved_estimator.fit(X_train, y_train)
# Best parameters found
print(evolved_estimator.best_params_)
# Use the model fitted with the best parameters
y_predict_ga = evolved_estimator.predict(X_test)
print(accuracy_score(y_test, y_predict_ga))
# Saved metadata for further analysis
print("Stats achieved in each generation: ", evolved_estimator.history)
print("Best k solutions: ", evolved_estimator.hof)
The log shown during fit is controlled by the verbose parameter.
See the changelog for notes on the changes between versions of Sklearn-genetic-opt.
- Official source code repo: https://github.com/rodrigo-arenas/Sklearn-genetic-opt/
- Download releases: https://pypi.org/project/sklearn-genetic-opt/
- Issue tracker: https://github.com/rodrigo-arenas/Sklearn-genetic-opt/issues
- Stable documentation: https://sklearn-genetic-opt.readthedocs.io/en/stable/
You can check out the latest development version with the command:
git clone https://github.com/rodrigo-arenas/Sklearn-genetic-opt.git
Contributions are more than welcome! There are lots of opportunities in this ongoing project, so please get in touch if you would like to help out. Also, check the Contribution guide.
After installation, you can launch the test suite from outside the source directory:
pytest sklearn_genetic