A library for unit scaling in PyTorch, based on the paper Unit Scaling: Out-of-the-Box Low-Precision Training.
Documentation can be found at https://graphcore-research.github.io/unit-scaling.
Note: The library is currently in its beta release. Some features have yet to be implemented and occasional bugs may be present. We're keen to help users with any problems they encounter.
To install the `unit-scaling` library, run:

```sh
pip install git+https://github.com/graphcore-research/unit-scaling.git
```
For a demonstration of the library and an overview of how it works, see Out-of-the-Box FP8 Training (a notebook showing how to unit-scale the nanoGPT model).
For a more in-depth explanation, consult our paper Unit Scaling: Out-of-the-Box Low-Precision Training.
And for a practical introduction to using the library, see our User Guide.
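The core idea behind the paper can be sketched without the library: unit scaling multiplies each operation by a fixed scale so that its outputs have approximately unit variance, keeping values within the representable range of low-precision formats. The following is a minimal, library-free illustration (pure Python; the `fan_in` and sample counts are arbitrary choices for the demo, and this does not use the `unit-scaling` API):

```python
import random
import statistics

random.seed(0)
fan_in = 256
n_samples = 2000

def matvec_output(scale):
    # One output element of a matrix-vector product with
    # N(0, 1) weights and N(0, 1) inputs, times a fixed scale.
    return scale * sum(
        random.gauss(0, 1) * random.gauss(0, 1) for _ in range(fan_in)
    )

unscaled = [matvec_output(1.0) for _ in range(n_samples)]
unit_scaled = [matvec_output(fan_in ** -0.5) for _ in range(n_samples)]

# Unscaled outputs have std ~ sqrt(fan_in) = 16; dividing by
# sqrt(fan_in) brings the std back to ~1 ("unit scale").
print(statistics.pstdev(unscaled))
print(statistics.pstdev(unit_scaled))
```

The same reasoning applies to gradients in the backward pass, which is why unit scaling chooses separate fixed scales for forward and backward computations.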
For users who wish to develop using this codebase, the following setup is required:
First-time setup:

```sh
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements-dev.txt  # Or requirements-dev-ipu.txt for the IPU
```

Subsequent setup:

```sh
source .venv/bin/activate
```

Run pre-flight checks (or run `./dev --help` to see supported commands):

```sh
./dev
```
IDE recommendations:
- Python interpreter is set to `.venv/bin/python`
- Format-on-save enabled
- Consider a `.env` file for setting `PYTHONPATH`, for example `echo "PYTHONPATH=$(pwd)" > .env` (note that this will be a different path if using devcontainers)
Docs development:
```sh
cd docs/
make html
```

then view `docs/_build/html/index.html` in your browser.
Copyright (c) 2023 Graphcore Ltd. Licensed under the Apache 2.0 License.
See NOTICE.md for further details.