Inference-Engine
Inference-Engine supports research in concurrent, large-batch inference and training of deep, feed-forward neural networks. Inference-Engine targets high-performance computing (HPC) applications with performance-critical inference and training needs. The initial target application is in situ training of a cloud microphysics model proxy for the Intermediate Complexity Atmospheric Research (ICAR) model. Such a proxy must support concurrent inference at every grid point at every time step of an ICAR run. For validation purposes, Inference-Engine also supports the export and import of neural networks to and from Python by the companion package nexport.
The features of Inference-Engine that make it suitable for use in HPC applications include
- Implementation in Fortran 2018,
- Exposing concurrency via
  - `elemental`, implicitly `pure` inference procedures,
  - an `elemental` and implicitly `pure` activation strategy, and
  - a `pure` training subroutine,
- Gathering network weights and biases into contiguous arrays for efficient memory access patterns, and
- User-controlled mini-batch size facilitating in situ training at application runtime.
Making Inference-Engine's `infer` functions and `train` subroutines `pure` facilitates invoking those procedures inside Fortran `do concurrent` constructs, which some compilers can offload automatically to graphics processing units (GPUs). The use of contiguous arrays promotes spatial locality in memory access patterns. User control of mini-batch size facilitates in situ training at application runtime.
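For illustration, the sketch below shows the usage pattern that such purity enables; the `shallow_net` function is a hypothetical stand-in for an Inference-Engine inference procedure, not the library's actual API:

```fortran
! Sketch only: shallow_net stands in for a trained network's inference procedure.
program do_concurrent_inference
  implicit none
  integer, parameter :: nx = 256, ny = 256
  real :: inputs(nx,ny), outputs(nx,ny)
  integer :: i, j

  call random_number(inputs)

  ! Because shallow_net is elemental (and therefore implicitly pure), this
  ! call is legal inside do concurrent and eligible for automatic offloading.
  do concurrent (i = 1:nx, j = 1:ny)
    outputs(i,j) = shallow_net(inputs(i,j))
  end do

  print *, "mean output: ", sum(outputs)/size(outputs)

contains

  elemental function shallow_net(x) result(y)
    real, intent(in) :: x
    real :: y
    y = 1./(1. + exp(-x))  ! a single sigmoid neuron as a trivial example
  end function

end program
```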
The available optimizers for training neural networks are
- Stochastic gradient descent
- Adam (recommended)
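For reference, the sketch below states the generic Adam update rule (Kingma & Ba, 2014) that an Adam optimizer applies to each weight; it is a textbook statement of the algorithm, not Inference-Engine's internal implementation, and the procedure name and interface are illustrative:

```fortran
! Generic Adam update -- an illustrative sketch, not Inference-Engine's code.
module adam_sketch_m
  implicit none
contains
  pure subroutine adam_update(w, grad, m, v, t, learning_rate)
    real, intent(inout) :: w(:), m(:), v(:)  ! weights, 1st & 2nd moment estimates
    real, intent(in) :: grad(:), learning_rate
    integer, intent(in) :: t                 ! 1-based time step
    real, parameter :: beta1 = 0.9, beta2 = 0.999, epsilon = 1.e-8
    m = beta1*m + (1. - beta1)*grad          ! update biased first moment
    v = beta2*v + (1. - beta2)*grad**2       ! update biased second moment
    w = w - learning_rate*(m/(1. - beta1**t))/(sqrt(v/(1. - beta2**t)) + epsilon)
  end subroutine
end module
```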
With the Fortran Package Manager (`fpm`) and a recent version of a Fortran compiler installed, enter one of the commands below to build the Inference-Engine library and run the test suite:
```
fpm test --profile release
```
or
```
fpm test --compiler ifx --profile release --flag -O3
```
This capability is under development with the goal of facilitating automatic GPU offloading via the following command:
```
fpm test --compiler ifx --profile release --flag "-fopenmp-target-do-concurrent -qopenmp -fopenmp-targets=spir64 -O3"
```
Building with `flang-new` requires passing flags to enable the compiler's experimental support for assumed-rank entities:
```
fpm test --compiler flang-new --flag "-mmlir -allow-assumed-rank -O3"
```
A script that might help with building `flang-new` from source is in the handy-dandy repository.
Building with the NAG compiler, `nagfor`:
```
fpm test --compiler nagfor --flag -fpp --profile release
```
Support for the Cray Compiler Environment (CCE) Fortran compiler is under development. Building with the CCE `ftn` compiler wrapper requires an additional trivial wrapper shell script. For example, create an executable file `crayftn.sh` with the following contents and place this file's location in your `PATH`:
```
#!/bin/bash
ftn "$@"
```
Then execute
```
fpm test --compiler crayftn.sh
```
The `example` subdirectory contains demonstrations of several intended use cases.
To see the format for a JSON configuration file that defines the hyperparameters and a new network configuration for a training run, execute the provided training-configuration output example program:
```
% ./build/run-fpm.sh run --example print-training-configuration
Project is up to date
{
    "hyperparameters": {
        "mini-batches" : 10,
        "learning rate" : 1.50000000,
        "optimizer" : "adam"
    }
,
    "network configuration": {
        "skip connections" : false,
        "nodes per layer" : [2,72,2],
        "activation function" : "sigmoid"
    }
}
```
As of this writing, the JSON file format is fragile. Because an Intel `ifx` compiler bug prevents using our preferred JSON interface, rojff, Inference-Engine currently uses a very restricted JSON subset written and read by the sourcery utility's `string_t` type-bound procedures. For this to work, it is important to keep input files as close as possible to the exact form shown above. In particular, do not split, combine, or reorder lines. Adding or removing whitespace should be OK.
Please see the Inference-Engine GitHub Pages site for HTML documentation generated by `ford`.