This package consists of a small extension library of highly optimized sparse update (scatter) operations for use in PyTorch, which are missing in the main package. Scatter operations can be roughly described as reduce operations based on a given "group-index" tensor. The package consists of the following operations:
- Scatter Add
- Scatter Sub
- Scatter Mul
- Scatter Div
- Scatter Mean
- Scatter Std
- Scatter Min
- Scatter Max
- Scatter LogSumExp
In addition, we provide composite functions that make use of `scatter_*` operations under the hood.

All included operations are broadcastable, work on varying data types, and are implemented for both CPU and GPU with corresponding backward implementations.
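To illustrate the idea before installing anything, here is a minimal sketch of what a scatter-style reduction computes, using only plain PyTorch's `scatter_add_`: values in `src` are grouped by their entry in `index` and each group is reduced (summed, in this case). The tensors are made up purely for illustration and are not part of this package's examples.

```python
import torch

# Group the entries of src by their index and sum each group
# (plain PyTorch only; torch-scatter provides the same pattern
# for many more reductions, with broadcasting and backward support).
src = torch.tensor([5., 1., 7., 2., 3., 2., 1., 3.])
index = torch.tensor([0, 0, 1, 0, 2, 2, 3, 3])

out = torch.zeros(4).scatter_add_(0, index, src)
print(out)  # tensor([8., 7., 5., 4.])
```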
Ensure that at least PyTorch 1.1.0 is installed and verify that `cuda/bin` and `cuda/include` are in your `$PATH` and `$CPATH` respectively, e.g.:
```
$ python -c "import torch; print(torch.__version__)"
>>> 1.1.0

$ echo $PATH
>>> /usr/local/cuda/bin:...

$ echo $CPATH
>>> /usr/local/cuda/include:...
```
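If you plan to build the CUDA extension, you can additionally confirm that the `nvcc` compiler is found on your path. This is an optional sanity check, not a step required by the instructions above:

```
$ nvcc --version
```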
Then run:

```
pip install torch-scatter
```
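As a quick, optional smoke test (not an official verification step), importing the package after installation should succeed without errors:

```
$ python -c "import torch; import torch_scatter"
```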
If you are running into any installation problems, please create an issue.
Be sure to `import torch` first before importing this package, so that the dynamic linker can resolve the symbols it needs.
```python
import torch
from torch_scatter import scatter_max

src = torch.tensor([[2, 0, 1, 4, 3], [0, 2, 1, 3, 4]])
index = torch.tensor([[4, 5, 4, 2, 3], [0, 0, 2, 2, 1]])
out, argmax = scatter_max(src, index, fill_value=0)

print(out)
tensor([[0, 0, 4, 3, 2, 0],
        [2, 4, 3, 0, 0, 0]])

print(argmax)
tensor([[-1, -1, 3, 4, 0, 1],
        [ 1, 4, 3, -1, -1, -1]])
```
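Since all operations are broadcastable, a one-dimensional `index` can also be applied along a chosen dimension of a multi-dimensional `src`. The following sketch (with made-up values) uses `scatter_mean` to average the entries of each group along `dim=1`; the result shown in the comment assumes the broadcasting behavior described above:

```python
import torch
from torch_scatter import scatter_mean

src = torch.tensor([[2., 0., 1., 4., 3.], [0., 2., 1., 3., 4.]])
index = torch.tensor([0, 1, 0, 1, 2])  # broadcast over the first dimension of src

out = scatter_mean(src, index, dim=1)
print(out)
# tensor([[1.5000, 2.0000, 3.0000],
#         [0.5000, 2.5000, 4.0000]])
```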
Run the test suite with:

```
python setup.py test
```