A tool for building a scientific software stack from a recipe for vClusters on CSCS' Alps infrastructure.
Use the `bootstrap.sh` script to install the necessary dependencies. The dependencies are installed in the `external` directory at the root of the project.
The tool generates the makefiles and spack configurations that build the spack environments, which are packaged together in the spack stack. It can be thought of as the equivalent of calling `cmake` or `configure` before running `make` to perform the configured build.
# configure the build
./bin/stack-config -b$BUILD_PATH -r$RECIPE_PATH
# build the spack stack
cd $BUILD_PATH
env --ignore-environment PATH=/usr/bin:/bin:`pwd`/spack/bin make modules store.squashfs -j64
# mount the stack
squashfs-run store.squashfs bash
- `-b, --build`: the path where the build stage will be configured.
- `-r, --recipe`: the path containing the recipe yaml files that describe the environment.
- `-d, --debug`: print detailed python error messages.
A recipe is the input provided to the tool. A recipe comprises the following yaml files in a directory:

- `config.yaml`: common configuration for the stack.
- `compilers.yaml`: the compilers provided by the stack.
- `environments.yaml`: environments that contain all the software packages.
- `modules.yaml`: optional module generation rules
    - follows the spec for [spack mirror configuration](https://spack.readthedocs.io/en/latest/mirrors.html)
- `packages.yaml`: optional package rules.
    - follows the spec for [spack package configuration](https://spack.readthedocs.io/en/latest/build_settings.html)
name: nvgpu-basic
store: /user-environment
system: hohgant
spack:
  repo: https://github.com/spack/spack.git
  commit: 6408b51
modules: True
- `name`: a plain text name for the environment.
- `store`: the location where the environment will be mounted.
- `system`: the name of the vCluster on which the stack will be deployed.
    - one of `balfrin` or `hohgant`.
    - cluster-specific details, such as the version and location of libfabric, are used when configuring and building the stack.
- `spack`: which spack repository to use for installation.
- `mirrors`: optional; configure the use of build caches, see the build cache documentation.
- `modules`: optional; enable/disable module file generation (default `True`).
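For illustration, the same configuration retargeted at the `balfrin` vCluster changes only the `system` field; every other value below is carried over from the example above:

name: nvgpu-basic
store: /user-environment
system: balfrin        # deploy on balfrin instead of hohgant
spack:
  repo: https://github.com/spack/spack.git
  commit: 6408b51
modules: True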
Take an example `compilers.yaml` configuration:
bootstrap:
  spec: gcc@11
gcc:
  specs:
  - gcc@11
llvm:
  requires: gcc@11
  specs:
  - nvhpc@21.7
  - llvm@14
The compilers are built in multiple stages:

1. bootstrap: A bootstrap gcc compiler is built using the system compiler (currently gcc 4.7.5).
    - `bootstrap:spec`: a single spec of the form `gcc@version`.
    - The selected version should have full support for the target architecture in order to build optimised gcc toolchains in step 2.
2. gcc: The bootstrap compiler is then used to build the gcc version(s) provided by the stack.
    - `gcc:specs`: a list of at least one spec of the form `gcc@version`.
3. llvm: (optional) The nvhpc and/or llvm toolchains are built using one of the gcc toolchains installed in step 2.
    - `llvm:specs`: a list of specs of the form `nvhpc@version` or `llvm@version`.
    - `llvm:requires`: the version of gcc from step 2 that is used to build the llvm compilers.
The first two steps are required, so that the simplest stack will provide at least one version of gcc compiled for the target architecture.
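For example, a minimal `compilers.yaml` for such a stack (a sketch that reuses the gcc version from the example above and omits the optional llvm stage) would be:

bootstrap:
  spec: gcc@11
gcc:
  specs:
  - gcc@11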
Note

Don't provide full specs, because the tool will insert "opinionated" specs for the target node type, for example:

- `nvhpc@21.7` generates `nvhpc@21.7 ~mpi~blas~lapack`
- `llvm@14` generates `llvm@14 +clang targets=x86 ~gold ^ninja@kitware`
- `gcc@11` generates `gcc@11 build_type=Release +profiled +strip`
The software packages are configured as disjoint environments; within each environment, every package is built with the same compiler and configured with a single implementation of MPI.
# environments.yaml
gcc-host:
  compiler:
  - toolchain: gcc
    spec: gcc@11.3
  unify: true
  specs:
  - hdf5 +mpi
  - fftw +mpi
  mpi:
    spec: cray-mpich
  gpu: false
An environment labelled `gcc-host` is built using `gcc@11.3` from the `gcc` compiler toolchain (note that the compiler spec must match a compiler from the toolchain that was installed via the `compilers.yaml` file). The tool will generate a `spack.yaml` specification:
# spack.yaml
spack:
  include:
  - compilers.yaml
  - config.yaml
  view: false
  concretizer:
    unify: True
  specs:
  - fftw +mpi
  - hdf5 +mpi
  - cray-mpich
  packages:
    all:
      compiler: [gcc@11.3]
    mpi:
      require: cray-mpich
Note

The `cray-mpich` spec is added to the list of package specs automatically. By setting `environments.ENV.mpi`, all packages in the environment `ENV` that use the virtual dependency `+mpi` will use the same `cray-mpich` implementation.
The following environment enables CUDA GPU support, built with the same gcc toolchain:

# environments.yaml
gcc-nvgpu:
  compiler:
  - toolchain: gcc
    spec: gcc@11.3
  unify: true
  specs:
  - cuda@11.8
  - fftw +mpi
  - hdf5 +mpi
  mpi:
    spec: cray-mpich
  gpu: cuda
Setting `environments:gcc-nvgpu:gpu` to `cuda` will build `cray-mpich` with support for GPU-direct. The following `spack.yaml` is generated:
# spack.yaml
spack:
  include:
  - compilers.yaml
  - config.yaml
  view: false
  concretizer:
    unify: True
  specs:
  - cuda@11.8
  - fftw +mpi
  - hdf5 +mpi
  - cray-mpich +cuda
  packages:
    all:
      compiler: [gcc@11.3]
    mpi:
      require: cray-mpich
To build a toolchain with the NVIDIA HPC SDK, we provide two compiler toolchains:

- the `llvm:nvhpc` compiler;
- a version of gcc from the `gcc` toolchain, in order to build dependencies (like CMake) that can't be built with nvhpc. If a second compiler is not provided, Spack will fall back to the system gcc 4.7.5, and as a result will not generate zen2/zen3 optimized code.
# environments.yaml
prgenv-nvidia:
  compiler:
  - toolchain: llvm
    spec: nvhpc
  - toolchain: gcc
    spec: gcc@11.3
  unify: true
  specs:
  - cuda@11.8
  - fftw%nvhpc +mpi
  - hdf5%nvhpc +mpi
  mpi:
    spec: cray-mpich
  gpu: cuda
The following `spack.yaml` is generated:
# spack.yaml
spack:
  include:
  - compilers.yaml
  - config.yaml
  view: false
  concretizer:
    unify: True
  specs:
  - cuda@11.8
  - fftw%nvhpc +mpi
  - hdf5%nvhpc +mpi
  - cray-mpich +cuda
  packages:
    all:
      compiler: [nvhpc, gcc@11.3]
    mpi:
      require: cray-mpich
An environment can also be built without MPI or GPU support, for example to provide common tools:

# environments.yaml
tools:
  compiler:
    toolchain: gcc
    spec: gcc@11.3
  unify: true
  specs:
  - cmake
  - python@3.10
  - tmux
  - reframe
  mpi: false
  gpu: false
The following `spack.yaml` is generated:

# spack.yaml
spack:
  include:
  - compilers.yaml
  - config.yaml
  view: false
  concretizer:
    unify: True
  specs:
  - cmake
  - python@3.10
  - tmux
  - reframe
  packages:
    all:
      compiler: [gcc@11.3]
New Spack packages or custom versions of a package can be added to the `alps` repo. If a `repo/` folder is provided, `stackinator` will copy all the Spack packages in `repo/packages/` into the `alps` repo (the same repo that provides `cray-mpich`). If the user provides a `repo.yaml` file in the `repo/` folder, the file will be ignored (and a warning is emitted).
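As an illustration, a `repo/` folder with a single custom package might be laid out as follows (the package name `mylib` is a placeholder, and the folder is assumed to sit alongside the recipe's yaml files):

repo/
└── packages/
    └── mylib/
        └── package.py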
Modules are generated for the installed compilers and packages by spack. The default module generation rules set by the version of spack specified in `config.yaml` will be used if no `modules.yaml` file is provided.

To set rules for module generation, provide a `modules.yaml` file as per the spack documentation.

To disable module generation, set `modules: False` in `config.yaml`.
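For example, the following fragment of `config.yaml` turns module generation off (a minimal sketch; the remaining fields are unchanged from the earlier example):

# config.yaml
modules: False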
A spack `packages.yaml` file is provided by the tool for each target cluster. This file sets system dependencies, such as libfabric and slurm, which are expected to be provided by the cluster and not built by Spack. A recipe can provide a `packages.yaml` file, which is merged with the cluster-specific `packages.yaml`.

For example, to enforce that every compiler and environment built uses the versions of perl and git installed on the system, add a file like the following (with appropriate version numbers and prefixes, of course):
# packages.yaml
packages:
  perl:
    buildable: false
    externals:
    - spec: perl@5.36.0
      prefix: /usr
  git:
    buildable: false
    externals:
    - spec: git@2.39.1
      prefix: /usr