volkstuner

An easy-to-use hyperparameter tuner


Introduction

volkstuner is an open source hyperparameter tuner.

Features

  • Deep learning framework agnostic

    Your training code can be based on PyTorch, MXNet, TensorFlow, etc.

  • Task agnostic

    You can tune hyperparameters for classification, semantic segmentation, and object detection, to name a few tasks.

  • Easy to use

    You just need to modify a few configurations in your original training code.

License

This project is released under the Apache 2.0 license.

Installation

Requirements

  • Linux
  • Python 3.7+
  • PyTorch 1.1.0 or higher
  • CUDA 9.0 or higher

We have tested the following versions of OS and software:

  • OS: Ubuntu 16.04.6 LTS
  • CUDA: 9.0
  • Python 3.7.3

Install volkstuner

a. Create a conda virtual environment and activate it.

conda create -n volkstuner python=3.7 -y
conda activate volkstuner

b. Install PyTorch and torchvision following the official instructions, e.g.,

conda install pytorch torchvision -c pytorch

c. Clone the volkstuner repository.

git clone https://github.com/Media-Smart/volkstuner.git
cd volkstuner
volkstuner_root=${PWD}

d. Install dependencies.

pip install -r requirements.txt

Train

a. Config

Modify the configuration as needed in a config file such as configs/torch/cifar10/baseline.py. If you don't want to use all GPUs, you can create a file ~/.volkstuner/resource.yml and specify the GPU ids there, e.g. gpu: [2, 3], as shown below.
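
For example, to restrict volkstuner to GPUs 2 and 3, ~/.volkstuner/resource.yml can contain just this line:

gpu: [2, 3]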

b. Run

python tools/auto.py configs/torch/cifar10/baseline.py

Snapshots and logs will be generated at ${volkstuner_root}/workdir. The best hyperparameters will be stored in the log file.

Custom job

a. Write your job in the jobs folder, like jobs/pytorch/cifar10/trainval.py (see the sketch after this list).

b. Define your configuration in the configs folder, like configs/torch/cifar10/baseline.py.
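
Below is a minimal sketch of what a custom CIFAR-10 job could look like with PyTorch. The function name, the argument names (lr, batch_size, epochs), and the way the metric is returned are assumptions for illustration only; mirror the existing jobs/pytorch/cifar10/trainval.py for the exact interface volkstuner expects.

# Hypothetical custom job; the real interface is defined by
# jobs/pytorch/cifar10/trainval.py.
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as transforms


def trainval(lr=0.1, batch_size=128, epochs=1):
    # lr, batch_size and epochs stand for the hyperparameters the tuner
    # samples for each trial; the names here are illustrative.
    transform = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.4914, 0.4822, 0.4465),
                             (0.2470, 0.2435, 0.2616)),
    ])
    trainset = torchvision.datasets.CIFAR10(
        root='./data', train=True, download=True, transform=transform)
    trainloader = torch.utils.data.DataLoader(
        trainset, batch_size=batch_size, shuffle=True, num_workers=2)

    device = 'cuda' if torch.cuda.is_available() else 'cpu'
    model = torchvision.models.resnet18(num_classes=10).to(device)
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)

    model.train()
    for _ in range(epochs):
        for images, targets in trainloader:
            images, targets = images.to(device), targets.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), targets)
            loss.backward()
            optimizer.step()

    # Return a metric for the tuner to optimize; how the metric is actually
    # reported back to volkstuner should follow the existing trainval.py job.
    return evaluate(model, device, transform)


def evaluate(model, device, transform):
    # Compute top-1 accuracy on the CIFAR-10 test split.
    testset = torchvision.datasets.CIFAR10(
        root='./data', train=False, download=True, transform=transform)
    testloader = torch.utils.data.DataLoader(testset, batch_size=256)
    model.eval()
    correct, total = 0, 0
    with torch.no_grad():
        for images, targets in testloader:
            preds = model(images.to(device)).argmax(dim=1).cpu()
            correct += (preds == targets).sum().item()
            total += targets.numel()
    return correct / total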

Contact

This repository is currently maintained by Hongxiang Cai (@hxcai) and Yichao Xiong (@mileistone).

Credits

Much of the code is adapted from autogluon.