BenchOpt

Making your benchmark of optimization algorithms simple and open


Benchmark repository for optimization

Requires Python 3.6 or later.

BenchOpt is a package that makes comparisons of optimization algorithms simpler, more transparent, and more reproducible.

BenchOpt is written in Python, but it can benchmark solvers written in many programming languages. So far it has been tested with Python, R, Julia, and compiled C/C++ binaries available via a terminal command. If a solver can be installed via conda, it should just work!

BenchOpt is used through the command line, as described in the API Documentation. Ultimately, running and replicating an optimization benchmark should be as simple as:

$ git clone https://github.com/benchopt/benchmark_logreg_l2
$ benchopt run --env ./benchmark_logreg_l2

Running this command will produce a benchmark plot for l2-regularized logistic regression:

https://benchopt.github.io/_images/sphx_glr_plot_run_benchmark_001.png

To discover which benchmarks are currently available, look for benchmark_* repositories on GitHub, such as the one for l1-regularized logistic regression.

Learn how to write a benchmark in our documentation.
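As a rough, self-contained sketch of what a benchmark solver looks like (the real version subclasses benchopt.BaseSolver and the exact hooks should be checked against the documentation; the hook names set_objective / run / get_result below follow the BenchOpt docs), here is a plain gradient-descent solver for l2-regularized logistic regression:

```python
import numpy as np

class Solver:
    """Sketch of a BenchOpt-style solver (the real one subclasses
    benchopt.BaseSolver); implements gradient descent on
    l2-regularized logistic regression with labels y in {-1, 1}."""

    name = "GD"

    def set_objective(self, X, y, lmbd):
        # Receive the problem data from the benchmark's objective.
        self.X, self.y, self.lmbd = X, y, lmbd

    def run(self, n_iter):
        # Step size 1/L, where L upper-bounds the gradient's
        # Lipschitz constant: ||X||_2^2 / 4 + lmbd.
        X, y, lmbd = self.X, self.y, self.lmbd
        L = np.linalg.norm(X, ord=2) ** 2 / 4 + lmbd
        w = np.zeros(X.shape[1])
        for _ in range(n_iter):
            z = -y * (X @ w)
            grad = X.T @ (-y / (1 + np.exp(-z))) + lmbd * w
            w -= grad / L
        self.w = w

    def get_result(self):
        # Return the iterate; BenchOpt evaluates the objective on it.
        return self.w
```

BenchOpt calls run with increasing budgets and records the objective value of each returned iterate, which is what produces the convergence curves in the benchmark plots.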

Install

This package can be installed through pip using:

$ pip install benchopt

This installs the command-line tool used to run benchmarks. Existing benchmarks can then be retrieved from Git or created locally. For instance, the Lasso benchmark can be retrieved with:

$ git clone https://github.com/benchopt/benchmark_lasso

Command line usage

To run the Lasso benchmark on all datasets and with all solvers, run:

$ benchopt run --env ./benchmark_lasso

Use

$ benchopt run -h

for more details on the available options, or read the API Documentation.
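For instance, a run can be restricted to specific solvers and datasets. The sketch below assumes the -s (solver), -d (dataset), --max-runs, and --n-repetitions options described in the documentation, with hypothetical solver and dataset names:

```shell
$ benchopt run ./benchmark_lasso -s sklearn -d simulated \
      --max-runs 10 --n-repetitions 5
```

Restricting the run this way is useful for quickly iterating on a single solver before launching the full benchmark.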

List of optimization problems available

  • ols: ordinary least-squares.
  • nnls: non-negative least-squares.
  • lasso: l1-regularized least-squares.
  • logreg_l2: l2-regularized logistic regression.
  • logreg_l1: l1-regularized logistic regression.
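To make the list above concrete, the objective functions of these problems can be written in a few lines of NumPy. This is a standalone illustration, not BenchOpt code; the regularization weight lam is a hypothetical parameter name:

```python
import numpy as np

def ols(X, y, w):
    # Ordinary least-squares: 0.5 * ||y - Xw||^2
    return 0.5 * np.sum((y - X @ w) ** 2)

def lasso(X, y, w, lam):
    # l1-regularized least-squares (the nnls problem instead
    # constrains w >= 0 rather than adding a penalty)
    return ols(X, y, w) + lam * np.sum(np.abs(w))

def logreg_l2(X, y, w, lam):
    # l2-regularized logistic regression, labels y in {-1, 1}
    return np.sum(np.log1p(np.exp(-y * (X @ w)))) + 0.5 * lam * w @ w
```

Each BenchOpt benchmark repository defines one such objective, and every solver in the repository is measured on how quickly it drives that objective down.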