Website | Docs | Install Guide | Tutorial
Optuna is an automatic hyperparameter optimization software framework, particularly designed for machine learning. It features an imperative, define-by-run style user API. Thanks to the define-by-run API, code written with Optuna enjoys high modularity, and users can dynamically construct the search spaces for the hyperparameters.
- 2020-08-07 We are welcoming contributions and are working on streamlining the experience. Read more about it in the blog
Optuna has modern functionalities as follows:
- Lightweight, versatile, and platform-agnostic architecture
  - Handle a wide variety of tasks with a simple installation that has few requirements.
- Pythonic search spaces
  - Define search spaces using familiar Python syntax, including conditionals and loops (see the sketch after this list).
- Efficient optimization algorithms
  - Adopt state-of-the-art algorithms for sampling hyperparameters and efficiently pruning unpromising trials.
- Easy parallelization
  - Scale studies to tens or hundreds of workers with little or no changes to the code.
- Quick visualization
  - Inspect optimization histories with a variety of plotting functions.
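As a minimal sketch of such a Pythonic search space, the snippet below uses a plain Python loop to decide how many layers a model has and how wide each layer is. The MLPRegressor model, the diabetes dataset, and the parameter names (`n_layers`, `n_units_l{i}`) are illustrative choices for this sketch, not part of the official example further down.

```python
import optuna
import sklearn.datasets
import sklearn.metrics
import sklearn.model_selection
import sklearn.neural_network


def objective(trial):
    # The number of layers is itself a hyperparameter; a plain Python loop
    # then creates one "units" hyperparameter per layer.
    n_layers = trial.suggest_int('n_layers', 1, 3)
    hidden_layer_sizes = tuple(
        trial.suggest_int('n_units_l{}'.format(i), 4, 128) for i in range(n_layers)
    )

    X, y = sklearn.datasets.load_diabetes(return_X_y=True)
    X_train, X_val, y_train, y_val = sklearn.model_selection.train_test_split(X, y, random_state=0)

    regressor = sklearn.neural_network.MLPRegressor(
        hidden_layer_sizes=hidden_layer_sizes, max_iter=300, random_state=0)
    regressor.fit(X_train, y_train)

    return sklearn.metrics.mean_squared_error(y_val, regressor.predict(X_val))


study = optuna.create_study()
study.optimize(objective, n_trials=20)
```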
We use the terms study and trial as follows:
- Study: optimization based on an objective function
- Trial: a single execution of the objective function
Please refer to the sample code below. The goal of a study is to find the optimal set of hyperparameter values (e.g., classifier and svr_c) through multiple trials (e.g., n_trials=100). Optuna is a framework designed for the automation and the acceleration of optimization studies.
```python
import optuna
import sklearn.datasets
import sklearn.ensemble
import sklearn.metrics
import sklearn.model_selection
import sklearn.svm


# Define an objective function to be minimized.
def objective(trial):
    # Invoke suggest methods of a Trial object to generate hyperparameters.
    regressor_name = trial.suggest_categorical('classifier', ['SVR', 'RandomForest'])
    if regressor_name == 'SVR':
        svr_c = trial.suggest_loguniform('svr_c', 1e-10, 1e10)
        regressor_obj = sklearn.svm.SVR(C=svr_c)
    else:
        rf_max_depth = trial.suggest_int('rf_max_depth', 2, 32)
        regressor_obj = sklearn.ensemble.RandomForestRegressor(max_depth=rf_max_depth)

    X, y = sklearn.datasets.load_boston(return_X_y=True)
    X_train, X_val, y_train, y_val = sklearn.model_selection.train_test_split(X, y, random_state=0)

    regressor_obj.fit(X_train, y_train)
    y_pred = regressor_obj.predict(X_val)

    error = sklearn.metrics.mean_squared_error(y_val, y_pred)

    return error  # An objective value linked with the Trial object.


study = optuna.create_study()  # Create a new study.
study.optimize(objective, n_trials=100)  # Invoke optimization of the objective function.
```
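Continuing the example above, the results of the finished study can then be read back directly from the study object (the values shown in the comments are only illustrative):

```python
# After study.optimize(...) has finished, the Study object holds the results.
print(study.best_params)        # e.g. {'classifier': 'RandomForest', 'rf_max_depth': 7}
print(study.best_value)         # Lowest mean squared error observed across trials.
print(study.best_trial.number)  # Index of the trial that achieved it.
```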
Integration modules, which enable pruning, or early stopping, of unpromising trials, are available for the following libraries (a manual pruning sketch follows the list):
- XGBoost
- LightGBM
- Chainer
- Keras
- TensorFlow
- tf.keras
- MXNet
- PyTorch Ignite
- PyTorch Lightning
- FastAI
- AllenNLP
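These integration modules are built on the same generic pruning interface: the objective reports intermediate values with trial.report and asks trial.should_prune whether to stop early. Below is a minimal manual-pruning sketch using scikit-learn's SGDClassifier and a MedianPruner; the model, dataset, and parameter choices are illustrative and not tied to any of the libraries listed above.

```python
import numpy as np
import optuna
import sklearn.datasets
import sklearn.linear_model
import sklearn.model_selection


def objective(trial):
    X, y = sklearn.datasets.load_digits(return_X_y=True)
    X_train, X_val, y_train, y_val = sklearn.model_selection.train_test_split(X, y, random_state=0)

    alpha = trial.suggest_loguniform('alpha', 1e-5, 1e-1)
    clf = sklearn.linear_model.SGDClassifier(alpha=alpha, random_state=0)

    for step in range(100):
        clf.partial_fit(X_train, y_train, classes=np.unique(y))

        # Report an intermediate value; the pruner may decide that this
        # trial is unpromising and stop it early.
        accuracy = clf.score(X_val, y_val)
        trial.report(accuracy, step)
        if trial.should_prune():
            raise optuna.TrialPruned()

    return accuracy


# MedianPruner stops trials whose intermediate values fall below the median
# of previously completed trials at the same step.
study = optuna.create_study(direction='maximize', pruner=optuna.pruners.MedianPruner())
study.optimize(objective, n_trials=20)
```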
Optuna is available at the Python Package Index and on Anaconda Cloud.
```bash
# PyPI
$ pip install optuna

# Anaconda Cloud
$ conda install -c conda-forge optuna
```
Optuna supports Python 3.5 or newer.
We also provide Optuna Docker images on DockerHub.
- GitHub Issues for bug reports, feature requests and questions.
- Gitter for interactive chat with developers.
- Stack Overflow for questions.
Any contributions to Optuna are more than welcome!
If you are new to Optuna, please check the good first issues. They are relatively simple, well-defined and are often good starting points for you to get familiar with the contribution workflow and other developers.
If you already have contributed to Optuna, we recommend the other contribution-welcome issues.
For general guidelines on how to contribute to the project, take a look at CONTRIBUTING.md.
Takuya Akiba, Shotaro Sano, Toshihiko Yanase, Takeru Ohta, and Masanori Koyama. 2019. Optuna: A Next-generation Hyperparameter Optimization Framework. In KDD (arXiv).