High-performance numerical integration on the GPU with PyTorch
The torchquad module allows utilizing GPUs for efficient numerical integration with PyTorch. The software is free to use and is designed for the machine learning community and research groups focusing on topics requiring high-dimensional integration.
This project is built with the following packages:
- PyTorch, which means it is fully differentiable and can be used for machine learning, and
- conda, which will take care of all requirements for you.
If torchquad proves useful to you, please consider citing the accompanying paper.
- Supporting science: Multidimensional numerical integration is needed in many fields, such as physics (from particle physics to astrophysics), in applied finance, in medical statistics, and others. torchquad aims to assist research groups in such fields, as well as the general machine learning community.
- Withstanding the curse of dimensionality: The curse of dimensionality makes deterministic methods in particular, but also stochastic ones, computationally expensive when the dimensionality increases. However, many integration methods are embarrassingly parallel, which means they can strongly benefit from GPU parallelization. The curse of dimensionality still applies but the improved scaling alleviates the computational impact.
- Enabling full differentiability: In line with recent trends (e.g. JAX), torchquad builds on PyTorch in such a way that it is fully differentiable. This enables a broad range of optimization and machine learning applications.
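As a plain-PyTorch sketch of what full differentiability means here (torchquad itself is not used below; the integrand f(x) = sin(a·x) and the Monte Carlo estimator are illustrative choices, not torchquad's API), autograd can propagate gradients through an integral estimate with respect to a parameter of the integrand:

```python
import torch

# Illustrative sketch (not torchquad's API): a Monte Carlo estimate of
# the integral of sin(a * x) over [0, 1], differentiated w.r.t. a.
# Analytically the integral is (1 - cos a) / a.
torch.manual_seed(0)
a = torch.tensor(2.0, requires_grad=True)
x = torch.rand(100_000)             # uniform samples on [0, 1]
estimate = torch.sin(a * x).mean()  # MC estimate of the integral
estimate.backward()                 # a.grad now holds d(estimate)/da
```

torchquad applies the same mechanism to its integrators, which is what makes them usable inside larger differentiable pipelines.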
This is a brief guide on how to set up torchquad.
We recommend using conda, especially if you want to utilize the GPU. It will automatically set up CUDA and the cudatoolkit for you in that case. Note that torchquad also works on the CPU; however, it is optimized for GPU usage. Currently torchquad only supports NVIDIA cards with CUDA. We are investigating future support for AMD cards through ROCm.
For a detailed list of required packages, please refer to the conda environment file.
The easiest way to install torchquad is simply to

```sh
conda install torchquad -c conda-forge -c pytorch
```

Note that since PyTorch is not yet on conda-forge for Windows, we have explicitly included it here using `-c pytorch`.

Alternatively, it is also possible to use

```sh
pip install torchquad
```

Note that pip will not set up PyTorch with CUDA and GPU support; therefore, we recommend using conda.
After installing torchquad through conda or pip, users can test its correct installation with:

```python
import torchquad

torchquad._deployment_test()
```
After cloning the repository, developers can check the functionality of torchquad by running the following command in the `torchquad/tests` directory:

```sh
pytest
```
GPU Utilization
With conda you can install the GPU version of PyTorch with:

```sh
conda install pytorch cudatoolkit -c pytorch
```
For alternative installation procedures please refer to the PyTorch Documentation.
This is a brief example of how torchquad can be used to compute a simple integral. For a more thorough introduction, please refer to the tutorial section in the documentation.
The full documentation can be found on readthedocs.
```python
# To avoid copying things to GPU memory,
# ideally allocate everything in torch on the GPU
# and avoid non-torch function calls
import torch
from torchquad import MonteCarlo, enable_cuda

# Enable GPU support if available
enable_cuda()

# The function we want to integrate, in this example
# f(x0, x1) = sin(x0) + e^x1 for x0 in [0, 1] and x1 in [-1, 1].
# Note that the function needs to support multiple evaluations at once
# (first dimension of x here).
# Expected result here is ~3.2698
def some_function(x):
    return torch.sin(x[:, 0]) + torch.exp(x[:, 1])

# Declare an integrator; here we use the simple, stochastic
# Monte Carlo integration method
mc = MonteCarlo()

# Compute the function integral by sampling 10000 points over the domain
integral_value = mc.integrate(
    some_function, dim=2, N=10000, integration_domain=[[0, 1], [-1, 1]]
)
```
You can find all available integrators here.
See the open issues for a list of proposed features (and known issues).
Using GPUs, torchquad scales particularly well with integration methods that offer easy parallelization. For example, below you see error and runtime results for integrating the function f(x,y,z) = sin(x * (y+1)²) * (z+1) on a consumer-grade desktop PC.
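The batched-evaluation requirement from the usage example applies to this benchmark integrand as well; a vectorized PyTorch version might look like this (the function name is illustrative):

```python
import torch

# Illustrative batched version of f(x, y, z) = sin(x * (y + 1)**2) * (z + 1);
# points has shape (N, 3), one integration point per row
def f(points):
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    return torch.sin(x * (y + 1) ** 2) * (z + 1)
```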
Runtime results of the integration. Note the far superior scaling on the GPU (solid line) in comparison to the CPU (dashed and dotted) for both methods.
Convergence results of the integration. Note that Simpson quickly reaches floating point precision. Monte Carlo is not competitive here given the low dimensionality of the problem.
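The convergence behaviour described above can be sketched in one dimension with plain Python (the helper functions below are illustrative, not torchquad's implementation): on a smooth, low-dimensional integrand, a composite Simpson rule converges far faster than Monte Carlo for the same number of function evaluations.

```python
import math
import random

def simpson(f, a, b, n):
    # composite Simpson rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return s * h / 3

def monte_carlo(f, a, b, n, seed=0):
    # plain Monte Carlo estimate with n uniform samples
    rng = random.Random(seed)
    return (b - a) * sum(f(rng.uniform(a, b)) for _ in range(n)) / n

# Integrate sin(x) on [0, pi]; the exact value is 2.
# With 100 evaluations each, Simpson's error is many orders of
# magnitude smaller than the Monte Carlo error.
print(abs(simpson(math.sin, 0.0, math.pi, 100) - 2.0))
print(abs(monte_carlo(math.sin, 0.0, math.pi, 100) - 2.0))
```

In higher dimensions this advantage shrinks, which is why Monte Carlo methods become attractive as the dimensionality grows.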
The project is open to community contributions. Feel free to open an issue or write us an email if you would like to discuss a problem or idea first.
If you want to contribute, please

- Fork the project on GitHub.
- Get the most up-to-date code by following this quick guide for installing torchquad from source:
  - Get miniconda or similar
  - Clone the repo

    ```sh
    git clone https://github.com/esa/torchquad.git
    ```

  - Set up the environment. This will create a conda environment called `torchquad`:

    ```sh
    conda env create -f environment.yml
    conda activate torchquad
    ```
Once the installation is done, you are ready to contribute.
Please note that PRs should be created from and into the `develop` branch. For each release, the `develop` branch is merged into `main`.
- Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
- Commit your Changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the Branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request on the `develop` branch, not `main` (NB: we autoformat every PR with black, so our GitHub Actions may create additional commits on your PR)

and we will have a look at your contribution as soon as we can.
Furthermore, please make sure that your PR passes all automated tests. Review will only happen after that.
Only PRs created on the `develop` branch with all tests passing will be considered. The only exception to this rule is if you want to update the documentation in relation to the current release on conda / pip. In that case, you may ask to merge directly into `main`.
Distributed under the GPL-3.0 License. See LICENSE for more information.
- Q: *Error enabling CUDA. cuda.is_available() returned False. CPU will be used.*
  A: This error indicates that no CUDA-compatible GPU could be found. Either you have no compatible GPU or the necessary CUDA requirements are missing. Using conda, you can install them with `conda install cudatoolkit`. For more detailed installation instructions, please refer to the PyTorch documentation.
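To check directly whether PyTorch can see a CUDA device, you can run the snippet below (the result depends on your machine and installation):

```python
import torch

# True if PyTorch was built with CUDA support and a compatible GPU is visible
print(torch.cuda.is_available())
```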
Created by ESA's Advanced Concepts Team
- Pablo Gómez - `pablo.gomez at esa.int`
- Gabriele Meoni - `gabriele.meoni at esa.int`
- Håvard Hem Toftevaag - `havard.hem.toftevaag at esa.int`
Project Link: https://github.com/esa/torchquad