The first collection of surrogate benchmarks for Joint Architecture and Hyperparameter Search (JAHS), built to support and facilitate research on multi-objective, cost-aware, and (multi-)multi-fidelity optimization algorithms.
Please see our documentation for details.
Using pip
pip install git+https://github.com/automl/jahs_bench_201.git
Optionally, you can download the data required to use the surrogate benchmark ahead of time with
python -m jahs_bench.download --target surrogates
To test whether the installation was successful, you can, e.g., run a minimal example with
python -m jahs_bench_examples.minimal
This should randomly sample a configuration and display both the sampled configuration and the result of querying the surrogate with it.
Configurations in our Joint Architecture and Hyperparameter Search (JAHS) space are represented as dictionaries, e.g.:
config = {
    'Optimizer': 'SGD',
    'LearningRate': 0.1,
    'WeightDecay': 5e-05,
    'Activation': 'Mish',
    'TrivialAugment': False,
    'Op1': 4,
    'Op2': 1,
    'Op3': 2,
    'Op4': 0,
    'Op5': 2,
    'Op6': 1,
    'N': 5,
    'W': 16,
    'Resolution': 1.0,
}
For a full description of the search space and its configurations, see our documentation.
import jahs_bench

benchmark = jahs_bench.Benchmark(task="cifar10", download=True)

# Sample a random configuration
config = benchmark.sample_config()

# Query the surrogate at the sampled configuration
results = benchmark(config, nepochs=200)

# Display the outputs
print(f"Config: {config}")  # A dict
print(f"Result: {results}")  # A dict
The API of our benchmark enables users to either query a surrogate model (the default), query the tables of performance data, or train a configuration from our search space from scratch using the same pipeline that was used to generate our benchmark data. Note, however, that the latter functionality requires installing jahs_bench_201 with the optional data_creation component and its relevant dependencies. The relevant data can be automatically downloaded by our API. See our documentation for details.
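As a sketch of how one of these alternative modes might be selected: the kind argument below is an assumption based on the surrogate/table/live modes described above, so consult the documentation for the exact signature.

import jahs_bench

# Query the precomputed performance tables instead of the surrogate.
# NOTE: the `kind` argument is an assumption based on the three modes
# described above; see the documentation for the exact API.
benchmark_table = jahs_bench.Benchmark(task="cifar10", kind="table", download=True)

config = benchmark_table.sample_config()
results = benchmark_table(config, nepochs=200)
print(f"Result from tables: {results}")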
We also provide documentation for the performance dataset used to train our surrogate models, as well as further information on the surrogate models themselves. See our experiments repository and our documentation.
We maintain leaderboards for several optimization tasks and algorithmic frameworks.