
Research boilerplate for PyTorch.


torchkit


torchkit is a lightweight library of PyTorch utilities useful for day-to-day research. Its main goal is to abstract away the redundant boilerplate common to research projects, such as experiment configuration, logging, and model checkpointing. It consists of:

torchkit.Logger A wrapper around TensorBoard's SummaryWriter for safe logging of scalars, images, videos, and learning rates. Supports both numpy arrays and torch Tensors.
torchkit.CheckpointManager A port of TensorFlow's checkpoint manager that automatically manages multiple checkpoints over the course of an experimental run.
torchkit.experiment A collection of methods for setting up experiment directories.
torchkit.layers A set of layers commonly used in research papers but not available in vanilla PyTorch, such as "same" and "causal" convolutions and SpatialSoftArgmax.
torchkit.losses Useful loss functions also unavailable in vanilla PyTorch, such as cross entropy with label smoothing and the Huber loss.
torchkit.utils Helper functions for config manipulation, I/O, timing, debugging, etc.
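To make the label-smoothing loss mentioned above concrete, here is a small pure-Python sketch of the underlying formula. This is an illustration of the technique, not torchkit's implementation; the function name and signature are made up for this example:

```python
import math

def smoothed_cross_entropy(logits, target, eps=0.1):
    """Cross entropy against a label-smoothed target distribution."""
    # Numerically stable log-softmax over the raw logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    log_probs = [math.log(e / z) for e in exps]

    # Smoothed target: (1 - eps) mass on the true class, eps spread
    # uniformly over all k classes.
    k = len(logits)
    loss = 0.0
    for i, lp in enumerate(log_probs):
        q = (1.0 - eps) * (1.0 if i == target else 0.0) + eps / k
        loss -= q * lp
    return loss
```

With eps=0 this reduces to standard cross entropy; a larger eps penalizes overconfident predictions, which is the point of label smoothing.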

For more details about each module, see the documentation.
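To give a feel for the checkpoint-rotation behavior that a checkpoint manager automates, here is a simplified standalone sketch. It is not torchkit's actual API; the class name, file layout, and methods are assumptions made for illustration only:

```python
import os

class TinyCheckpointManager:
    """Keep at most `max_to_keep` checkpoints, deleting the oldest first."""

    def __init__(self, directory, max_to_keep=3):
        self.directory = directory
        self.max_to_keep = max_to_keep
        os.makedirs(directory, exist_ok=True)
        self._paths = []

    def save(self, step, data):
        # In a real manager this would serialize model/optimizer state.
        path = os.path.join(self.directory, f"ckpt_{step}.txt")
        with open(path, "w") as f:
            f.write(data)
        self._paths.append(path)
        # Rotate: drop the oldest checkpoints beyond the retention limit.
        while len(self._paths) > self.max_to_keep:
            os.remove(self._paths.pop(0))

    def latest(self):
        return self._paths[-1] if self._paths else None
```

After five saves with max_to_keep=3, only the three most recent checkpoint files remain on disk, which is the bookkeeping that would otherwise clutter every training script.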

Installation

To install the latest release, run:

pip install git+https://github.com/kevinzakka/torchkit.git

Contributing

For development, clone the source code, create a virtual environment, and install torchkit in editable mode with the development dependencies:

git clone https://github.com/kevinzakka/torchkit.git
cd torchkit
python -m venv .venv
source .venv/bin/activate
pip install -e ".[dev]"

Acknowledgments

  • Thanks to Karan Desai's VirTex, which I used to figure out the documentation setup for torchkit, and which is an excellent example of a stellar open-source research release.
  • Thanks to seals for the excellent software development practices that I've tried to emulate in this repo.
  • Thanks to Brent Yi for encouraging me to use type hinting and for letting me use his awesome Bayesian filtering library's README as a template.