Ginkgo is a high-performance linear algebra library for manycore systems, with a focus on the solution of sparse linear systems. It is implemented using modern C++ (you will need at least a C++11 compliant compiler to build it), with GPU kernels implemented in CUDA.
An extensive database of up-to-date benchmark results is available in the performance data repository. Visualizations of the database can be interactively generated using the Ginkgo Performance Explorer web application. The benchmark results are automatically updated by the CI system to always reflect the current state of the library.
For the Ginkgo core library on Linux:
- cmake 3.1+
- C++11 compliant compiler, one of:
  - gcc 5.4.0+
  - clang 3.3+ (TODO: verify, works with 5.0)
The Ginkgo CUDA module has the following additional requirements:
- cmake 3.10+
- CUDA 7.0+ (TODO: verify, works with 8.0)
- Any host compiler restrictions your version of CUDA may impose also apply here. For the newest CUDA version, this information can be found in the CUDA installation guide.
In addition, if you want to contribute code to Ginkgo, you will also need the following:
- clang-format 5.0.1+ (ships as part of clang)
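To quickly check whether your toolchain satisfies these requirements, you can query the installed versions (a sketch; the executable names may differ on your system):

```sh
# Check the versions of the installed tools (executable names may differ on your system).
cmake --version
g++ --version            # or: clang++ --version
nvcc --version           # only needed for the CUDA module
clang-format --version   # only needed if you want to contribute code
```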
For the Ginkgo core library on Mac OS:
- cmake 3.1+
- C++11 compliant compiler, one of:
  - gcc 5.4.0+ (TODO: verify)
  - clang 3.3+ (TODO: verify)
  - Apple LLVM 8.0+ (TODO: verify)
The Ginkgo CUDA module has the following additional requirements:
- cmake 3.8+
- CUDA 7.0+ (TODO: verify)
- Any host compiler restrictions your version of CUDA may impose also apply here. For the newest CUDA version, this information can be found in the CUDA installation guide.

In addition, if you want to contribute code to Ginkgo, you will also need the following:
- clang-format 5.0.1+ (ships as part of clang)

NOTE: If you want to use clang as your compiler and develop Ginkgo, you will currently need two versions of clang: clang 4.0.0 or older, as this is the version supported by the CUDA 9.1 toolkit, and clang 5.0.1 or newer, which will not be used for compilation, but only provides the clang-format utility.
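As an illustration of such a setup, the following is a minimal sketch, assuming clang 4.0 is installed as clang-4.0/clang++-4.0 and a clang-format 5.0.1+ executable is available on your PATH (the exact names depend on your system):

```sh
# Use clang 4.0 (supported by the CUDA 9.1 toolkit) as the compiler;
# the newer clang-format found on the PATH is only used by the development tools.
mkdir build && cd build
CC=clang-4.0 CXX=clang++-4.0 cmake -G "Unix Makefiles" -DDEVEL_TOOLS=ON -DBUILD_CUDA=ON ..
make
```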
Windows is currently not supported, but we are working on porting the library there. If you are interested in helping us with this effort, feel free to contact one of the developers. (The library itself doesn't use any non-standard C++ features, so most of the effort here is in modifying the build system.)
TODO: Some restrictions will also apply to the version of the C and C++ standard libraries installed on the system. We need to investigate this further.
Use the standard cmake build procedure:
mkdir build; cd build
cmake -G "Unix Makefiles" [OPTIONS] .. && make
Replace [OPTIONS] with the desired cmake options for your build.
Ginkgo adds the following additional switches to control what is being built:
- -DDEVEL_TOOLS={ON, OFF} sets up the build system for development (requires clang-format, will also download git-cmake-format), default is ON
- -DBUILD_TESTS={ON, OFF} builds Ginkgo's tests (will download googletest), default is ON
- -DBUILD_BENCHMARKS={ON, OFF} builds Ginkgo's benchmarks (will download gflags and rapidjson), default is ON
- -DBUILD_EXAMPLES={ON, OFF} builds Ginkgo's examples, default is ON
- -DBUILD_REFERENCE={ON, OFF} builds reference implementations of the kernels, useful for testing, default is OFF
- -DBUILD_OMP={ON, OFF} builds optimized OpenMP versions of the kernels, default is OFF
- -DBUILD_CUDA={ON, OFF} builds optimized CUDA versions of the kernels (requires CUDA), default is OFF
- -DBUILD_DOC={ON, OFF} creates an HTML version of Ginkgo's documentation from inline comments in the code
- -DSET_CUDA_HOST_COMPILER={ON, OFF} instructs the build system to explicitly set CUDA's host compiler to match the compiler used to build the rest of the library (otherwise the nvcc toolchain uses its default host compiler). Setting this option may help if you're experiencing linking errors due to ABI incompatibilities. The default is OFF.
- -DCMAKE_INSTALL_PREFIX=path sets the installation path for make install. The default value is usually something like /usr/local.
- -DCUDA_ARCHITECTURES=<list> where <list> is a semicolon (;) separated list of architectures. Supported values are: Auto, Kepler, Maxwell, Pascal, Volta, CODE, CODE(COMPUTE) and (COMPUTE). Auto will automatically detect the CUDA-enabled GPU architectures present in the system. Kepler, Maxwell, Pascal and Volta will add flags for all architectures of that particular NVIDIA GPU generation. COMPUTE and CODE are placeholders that should be replaced with compute and code numbers (e.g. for compute_70 and sm_70, COMPUTE and CODE should be replaced with 70). Default is Auto. For a more detailed explanation of this option see the ARCHITECTURES specification list section in the documentation of the CudaArchitectureSelector CMake module.
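For illustration, the following sketch builds the CUDA kernels for all Pascal GPUs and additionally for compute capability 7.0 devices; adapt the architecture list to your hardware:

```sh
# Build the CUDA kernels for all Pascal GPUs and additionally for
# compute capability 7.0 (compute_70 / sm_70) devices.
cmake -G "Unix Makefiles" -DBUILD_CUDA=ON -DCUDA_ARCHITECTURES="Pascal;70" ..
make
```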
For example, to build everything (in debug mode), use:
mkdir build; cd build
cmake -G "Unix Makefiles" -DCMAKE_BUILD_TYPE=Debug -DDEVEL_TOOLS=ON \
-DBUILD_TESTS=ON -DBUILD_REFERENCE=ON -DBUILD_OMP=ON -DBUILD_CUDA=ON ..
make
NOTE: Currently, the only verified CMake generator is Unix Makefiles. Other generators may work, but are not officially supported.
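If you nevertheless want to try another generator such as Ninja (assuming it is installed), the configuration would look roughly as follows; keep in mind that this is not officially supported:

```sh
# Unverified, but may work: configure and build with the Ninja generator.
mkdir build && cd build
cmake -G Ninja [OPTIONS] ..
ninja
```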
You need to compile Ginkgo with the -DBUILD_TESTS=ON option to be able to run the tests. Use the following command inside the build folder to run all tests:
make test
The output should contain several lines of the form:
Start 1: path/to/test
1/13 Test #1: path/to/test ............................. Passed 0.01 sec
To run only a specific test and see more detailed results (e.g. if a test failed), run the following from the build folder:
./path/to/test
where path/to/test is the path returned by make test.
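The output shown above is produced by CTest, so, assuming a standard CTest setup, you can also filter tests by a name pattern and get verbose output for failures:

```sh
# Run only the tests whose names match the given pattern (replace it with
# the test you are interested in) and show full output for failing tests.
ctest -R "path/to/test" --output-on-failure
```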
To install Ginkgo into the specified folder, execute the following command in the build folder:
make install
If the installation prefix (see CMAKE_INSTALL_PREFIX) is not writable for your user, e.g. when installing Ginkgo system-wide, it might be necessary to prefix the call with sudo.
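Alternatively, you can avoid the need for elevated privileges altogether by choosing a user-writable installation prefix at configure time; the path below is just an example:

```sh
# Configure with a user-writable installation prefix, then build and install
# without elevated privileges.
cmake -G "Unix Makefiles" -DCMAKE_INSTALL_PREFIX=$HOME/opt/ginkgo ..
make
make install
```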
Refer to ABOUT-LICENSING.md for details.