
Hyperparameter optimization, or tuning, is the problem of choosing a set of optimal hyperparameters for a learning algorithm. A hyperparameter is a parameter whose value is set before training and controls the learning process; by contrast, the values of other parameters (typically node weights) are learned from the data. Hyperparameters are crucial because they govern the overall behavior of a machine learning model, and the goal of tuning is to find the combination that minimizes a predefined loss function.


Hyperparameter_Tuning_Techniques

All Techniques of Hyperparameter Optimization

1. GridSearchCV
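
GridSearchCV exhaustively evaluates every combination in a user-defined parameter grid with cross-validation. A minimal sketch, assuming a RandomForestClassifier on the iris dataset (the estimator and grid values are illustrative, not taken from the repo's notebook):

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_iris(return_X_y=True)

# Illustrative grid: 3 x 3 = 9 combinations, each scored with 5-fold CV.
param_grid = {
    "n_estimators": [50, 100, 200],
    "max_depth": [None, 5, 10],
}

search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid,
    cv=5,
    scoring="accuracy",
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```

Grid search is reliable for small spaces, but its cost grows exponentially with the number of hyperparameters.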

2. RandomizedSearchCV
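
RandomizedSearchCV samples a fixed number of configurations from distributions instead of enumerating every combination, so the budget is controlled by n_iter. A minimal sketch under the same illustrative setup:

```python
from scipy.stats import randint
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = load_iris(return_X_y=True)

# Distributions rather than fixed lists: each trial draws one value from each.
param_distributions = {
    "n_estimators": randint(50, 300),
    "max_depth": randint(2, 12),
}

search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions,
    n_iter=20,  # only 20 sampled configurations, regardless of space size
    cv=5,
    random_state=0,
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```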

3. Bayesian Optimization - Automated Hyperparameter Tuning (Hyperopt)
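
Hyperopt implements Bayesian-style optimization via Tree-structured Parzen Estimators (TPE), which use past trial results to propose promising configurations. A minimal sketch, assuming an illustrative objective that maximizes cross-validated accuracy:

```python
from hyperopt import STATUS_OK, Trials, fmin, hp, space_eval, tpe
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)

space = {
    "n_estimators": hp.choice("n_estimators", [50, 100, 200]),
    "max_depth": hp.quniform("max_depth", 2, 12, 1),  # sampled as floats
}

def objective(params):
    model = RandomForestClassifier(
        n_estimators=params["n_estimators"],
        max_depth=int(params["max_depth"]),
        random_state=0,
    )
    score = cross_val_score(model, X, y, cv=5).mean()
    return {"loss": -score, "status": STATUS_OK}  # fmin minimizes the loss

trials = Trials()
best = fmin(objective, space, algo=tpe.suggest, max_evals=30, trials=trials)
print(space_eval(space, best))  # map choice indices back to actual values
```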

4. Sequential Model-Based Optimization (tuning a scikit-learn estimator with skopt)
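
Sequential model-based optimization fits a surrogate model (e.g. a Gaussian process) over completed evaluations and picks the next candidate via an acquisition function. A minimal sketch using scikit-optimize's BayesSearchCV, a drop-in replacement for GridSearchCV; the search space below is illustrative:

```python
from skopt import BayesSearchCV
from skopt.space import Integer
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)

search = BayesSearchCV(
    RandomForestClassifier(random_state=0),
    {
        "n_estimators": Integer(50, 300),
        "max_depth": Integer(2, 12),
    },
    n_iter=25,  # surrogate-guided evaluations
    cv=5,
    random_state=0,
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```

The skopt example in reference 6 achieves the same with gp_minimize and a plain objective function.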

5. Optuna - Automated Hyperparameter Tuning
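
Optuna takes a define-by-run approach: hyperparameters are sampled inside the objective via the trial object, and a TPE sampler guides the study by default. A minimal sketch with an illustrative objective:

```python
import optuna
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)

def objective(trial):
    # Parameters are declared at sampling time, inside the objective.
    params = {
        "n_estimators": trial.suggest_int("n_estimators", 50, 300),
        "max_depth": trial.suggest_int("max_depth", 2, 12),
    }
    model = RandomForestClassifier(**params, random_state=0)
    return cross_val_score(model, X, y, cv=5).mean()

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=30)
print(study.best_params, study.best_value)
```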

6. Genetic Algorithms (TPOT Classifier)
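
TPOT uses genetic programming to evolve whole scikit-learn pipelines (preprocessing, model, and hyperparameters) across generations. A minimal sketch, assuming the classic TPOT API; generations and population_size are illustrative and kept small so the run finishes quickly:

```python
from tpot import TPOTClassifier
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

tpot = TPOTClassifier(
    generations=5,       # evolutionary iterations
    population_size=20,  # candidate pipelines per generation
    cv=5,
    random_state=0,
    verbosity=2,
)
tpot.fit(X_train, y_train)
print(tpot.score(X_test, y_test))
tpot.export("best_pipeline.py")  # write the winning pipeline as Python code
```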

References

1. https://github.com/fmfn/BayesianOptimization

2. https://github.com/hyperopt/hyperopt

3. https://www.jeremyjordan.me/hyperparameter-tuning/

4. https://optuna.org/

5. https://towardsdatascience.com/hyperparameters-optimization-526348bb8e2d (by Pier Paolo Ippolito)

6. https://scikit-optimize.github.io/stable/auto_examples/hyperparameter-optimization.html

Kaggle discussion: https://www.kaggle.com/pavansanagapati/automated-hyperparameter-tuning