NePS helps deep learning experts to optimize the hyperparameters and/or architecture of their deep learning pipeline with:
- Hyperparameter Optimization (HPO) (example)
- Neural Architecture Search (NAS) (example, paper)
- Joint Architecture and Hyperparameter Search (JAHS) (example, paper)
For efficiency and convenience, NePS allows you to
- Add your intuition as priors for the search (example HPO, example JAHS, paper)
- Utilize low-fidelity (e.g., low-epoch) evaluations to focus on promising configurations (example, paper)
- Trivially parallelize across machines (example, documentation)
Or all of the above for maximum efficiency! A sketch of how priors and fidelities look in a search space follows below.
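To make these concrete, here is a minimal sketch of a search space combining a prior and a fidelity. The parameter names (`learning_rate`, `epochs`) are illustrative, and the keyword arguments `log`, `default`, `default_confidence`, and `is_fidelity` are assumed from the v0.x API; check the documentation for the exact signatures.

```python
import neps

# A sketch of a search space with a prior and a fidelity; the keyword
# arguments log, default, default_confidence, and is_fidelity are
# assumed from the v0.x API (see the documentation for exact signatures).
pipeline_space = dict(
    # Prior: encode the intuition that a learning rate near 1e-3 works well.
    learning_rate=neps.FloatParameter(
        lower=1e-5,
        upper=1e-1,
        log=True,
        default=1e-3,
        default_confidence="medium",
    ),
    # Fidelity: cheap low-epoch evaluations steer the search toward
    # promising configurations before spending full training budgets.
    epochs=neps.IntegerParameter(lower=1, upper=100, is_fidelity=True),
)
```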
As indicated by the v0.x.x version number, NePS is early-stage code, and APIs might change in the future.
Please have a look at our documentation and examples.
Using pip:

```bash
pip install neural-pipeline-search
```
Using `neps` always follows the same pattern:

- Define a `run_pipeline` function that evaluates architectures/hyperparameters for your problem
- Define a search space `pipeline_space` of architectures/hyperparameters
- Call `neps.run` to optimize `run_pipeline` over `pipeline_space`
In code, the usage pattern can look like this:
```python
import logging

import neps


# 1. Define a function that accepts hyperparameters and computes the validation error
def run_pipeline(hyperparameter_a: float, hyperparameter_b: int):
    validation_error = -hyperparameter_a * hyperparameter_b
    return validation_error


# 2. Define a search space of hyperparameters; use the same names as in run_pipeline
pipeline_space = dict(
    hyperparameter_a=neps.FloatParameter(lower=0, upper=1),
    hyperparameter_b=neps.IntegerParameter(lower=1, upper=100),
)

# 3. Call neps.run to optimize run_pipeline over pipeline_space
logging.basicConfig(level=logging.INFO)
neps.run(
    run_pipeline=run_pipeline,
    pipeline_space=pipeline_space,
    root_directory="usage_example",
    max_evaluations_total=5,
)
```
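Because the optimization state lives in `root_directory`, a run can be continued or parallelized. As a minimal sketch, assuming `neps.run` resumes from the state stored in `root_directory` (the basis of the parallelization pattern mentioned above), calling it again with the same directory and a larger budget picks up where the previous run stopped:

```python
# A sketch of continuing (or parallelizing) the run above, assuming
# neps.run resumes from the state in root_directory. Launching the same
# script on several machines that share this directory is the
# parallelization pattern referred to above.
neps.run(
    run_pipeline=run_pipeline,
    pipeline_space=pipeline_space,
    root_directory="usage_example",   # same directory as the first call
    max_evaluations_total=15,         # raised budget: evaluate more configurations
)
```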
For more details and features, please have a look at our documentation and examples.
See our documentation on analysing runs.
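For a quick summary without leaving Python, a hedged sketch: the `neps.status` helper (an assumption; verify it exists in your NePS version) reads the state in `root_directory` and reports evaluated and pending configurations:

```python
import neps

# A sketch, assuming the neps.status helper is available in your
# version: it inspects the state stored in root_directory and prints
# a summary of evaluated and pending configurations.
neps.status("usage_example")
```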
NePS does not cover your use case? Have a look at some alternatives.
Please see the documentation for contributors.