VARIORUM

Welcome to Variorum, a platform-agnostic library exposing monitoring and control interfaces for several features in hardware architectures.

Version: 0.8.0

Last update: 26 March 2024

Webpage: https://variorum.readthedocs.io

Overview

Variorum is an extensible vendor-neutral library for Linux that exposes power and performance monitoring and control of low-level hardware dials.

Documentation

To get started building and using Variorum, check out the full documentation here:

https://variorum.readthedocs.io

Getting Started

Installation is simple. You will need CMake version 2.8 or higher and GCC. Variorum does not support in-source builds. In most cases, the installation is as follows:

    $ cd variorum/
    $ mkdir build && mkdir install
    $ cd build/
    $ cmake -DCMAKE_INSTALL_PREFIX=../install ../src
    $ make
    $ make install

Note that Variorum depends on hwloc and Jansson. The build system will first check for a specified local install of these dependencies, then check whether they are already installed on the system. If it can find neither, it will clone and build the dependency from source (this will fail on machines without internet access).
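If you already have local installs of these dependencies, you can point CMake at them explicitly. The following is a sketch only, assuming the cache variables HWLOC_DIR and JANSSON_DIR (the names used in the provided host configs; check the files under host-configs/ if yours differ):

    $ cmake -DCMAKE_INSTALL_PREFIX=../install \
            -DHWLOC_DIR=/path/to/hwloc/install \
            -DJANSSON_DIR=/path/to/jansson/install \
            ../src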

CMake Host Config Files

We provide host configuration files for specific systems to pre-populate the CMake cache. These files define the compilers and paths to hwloc installs, and can be used as follows:

    $ cd variorum/
    $ mkdir build && mkdir install
    $ cd build/
    $ cmake -C ../host-configs/your-local-configuration-file.cmake ../src
    $ make

Platform and Microarchitecture Support

Variorum has support for an increasing number of platforms and microarchitectures:

Platforms supported: AMD, ARM, IBM, Intel, NVIDIA

If you are unsure of your architecture number, check the "model" field in lscpu or /proc/cpuinfo (note that it will not be in hexadecimal).
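As a quick check, either of the following standard commands prints the model number (in decimal; 85, for example, corresponds to 0x55):

    $ lscpu | grep "Model:"
    $ grep -m1 "^model" /proc/cpuinfo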

Supported AMD Microarchitectures:

Family 19h, Models 0h-Fh and 30h-3Fh (EPYC Milan and Genoa)

Supported AMD GPUs:

AMD Radeon Instinct, all models from MI50 onwards.

Supported ARM Microarchitectures:

Juno r2
Neoverse N1

Supported IBM Microarchitectures:

Power9

Supported Intel Microarchitectures:

0x2A (Sandy Bridge)
0x2D (Sandy Bridge)
0x3E (Ivy Bridge)
0x3F (Haswell)
0x4F (Broadwell)
0x9E (Kaby Lake)
0x55 (Skylake, Cascade Lake, Cooper Lake)
0x6A (Ice Lake)
0x8F (Sapphire Rapids)

Supported Intel GPUs:

Intel Arctic Sound, Intel Ponte Vecchio

Supported Nvidia Microarchitectures:

Volta
Ampere 

Testing

From within the build directory, unit tests can be executed as follows:

    $ make test
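
Because the build is generated by CMake, make test is backed by CTest; for more verbose output on failures you can also run the test driver directly:

    $ ctest --output-on-failure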

Please report any failed tests to the project maintainers.

Examples

For sample code, see the examples/ directory.
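
As an illustrative sketch only (not a copy of the shipped examples), a minimal client that prints node power might look like the following. It assumes the public header variorum.h and the variorum_print_power() API described in the documentation:

    #include <stdio.h>

    #include <variorum.h>

    int main(void)
    {
        /* Query and print power usage for the node; returns 0 on success. */
        int ret = variorum_print_power();
        if (ret != 0)
        {
            fprintf(stderr, "variorum_print_power() failed\n");
            return ret;
        }
        return 0;
    }

Compile it against your install prefix, e.g., gcc example.c -I../install/include -L../install/lib -lvariorum.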

System Environment Requirements

This software has certain system requirements depending on what hardware you are running on.

AMD: Running this software on AMD platforms depends on the AMD Energy Driver, AMD HSMP driver, and AMD E-SMI library being available with the correct permissions.

AMD GPU: Running this software on AMD GPU platforms depends on the ROCm-SMI library.

ARM: Running this software on ARM platforms depends on the Linux Hardware Monitoring (hwmon) subsystem for access to the monitoring and control interfaces.

IBM: Running this software on IBM platforms depends on the OPAL files being present with R/W permissions.

Intel: Running this software on Intel platforms depends on the files /dev/cpu/*/msr being present with R/W permissions. Recent kernels require additional capabilities. We have found it easier to use our own msr-safe kernel module, but running as root (or going through the bother of adding the capabilities to the binaries) is another option.

Nvidia: Running this software on Nvidia platforms depends on CUDA being loaded.
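
For the Intel requirement above, a quick way to confirm access is to check that the MSR device files exist and are readable/writable by your user; the second path applies only if the msr-safe kernel module is loaded:

    $ ls -l /dev/cpu/*/msr
    $ ls -l /dev/cpu/*/msr_safe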

Bug Tracker

Bugs and feature requests are being tracked on GitHub Issues.

Contributing

We welcome all kinds of contributions: new features, bug fixes, documentation, edits, etc.!

To contribute, make a pull request, with dev as the destination branch. We use GitHub Actions to run CI tests, and your branch must pass these tests before being merged.

See the CONTRIBUTING guide for more information.

Authors

Stephanie Brink, brink2@llnl.gov
Aniruddha Marathe, marathe1@llnl.gov
Tapasya Patki, patki1@llnl.gov
Barry Rountree, rountree@llnl.gov

Please feel free to contact the developers with any questions or feedback.

We collect names of those who have contributed to Variorum over the years. See the current list in Contributors.

License

Variorum is released under the MIT license. For more details, see the LICENSE and NOTICE files.

SPDX-License-Identifier: MIT

LLNL-CODE-789253

Acknowledgments

This research was supported by the Exascale Computing Project (17-SC-20-SC), a joint project of the U.S. Department of Energy's Office of Science and National Nuclear Security Administration, responsible for delivering a capable exascale ecosystem, including software, applications, and hardware technology, to support the nation's exascale computing imperative.