rusty1s/pytorch_cluster

ARM64 torch_cluster.radius bug

JSSchmidt opened this issue · 1 comment

When executing torch_cluster.radius on my MacBook Pro 16-inch (Nov 2023, M3 Pro), even a minimal working example raises an error. The problem does not occur on an Ubuntu 22.04.4 server.

Minimal working example:

```python
from torch_cluster import radius
import torch

radius(torch.tensor([[0, 0], [1, 0], [0, 1], [1, 1]]), torch.tensor([[0, 0], [1, 0], [0, 1], [1, 1]]), r=1.5)
```
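
(Possibly relevant: if I remember the README example for `radius` correctly, it builds its point tensors as floating point, whereas the snippet above passes integer tensors. In case the dtype matters on the ARM64 build, the float variant of the same call would be:)

```python
import torch
from torch_cluster import radius

# Same points as above, but with an explicit floating-point dtype.
pts = torch.tensor([[0, 0], [1, 0], [0, 1], [1, 1]], dtype=torch.float)
radius(pts, pts, r=1.5)
```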

```
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
Cell In[4], line 4
      1 from torch_cluster import radius
      2 import torch
----> 4 radius(torch.tensor([[0, 0], [1, 0], [0, 1], [1, 1]]), torch.tensor([[0, 0], [1, 0], [0, 1], [1, 1]]), r=1.5)

File ~/micromamba/envs/mldft/lib/python3.11/site-packages/torch_cluster/radius.py:82, in radius(x, y, r, batch_x, batch_y, max_num_neighbors, num_workers, batch_size)
     79 ptr_x = torch.bucketize(arange, batch_x)
     80 ptr_y = torch.bucketize(arange, batch_y)
---> 82 return torch.ops.torch_cluster.radius(x, y, ptr_x, ptr_y, r,
     83                                       max_num_neighbors, num_workers)

File ~/micromamba/envs/mldft/lib/python3.11/site-packages/torch/ops.py:854, in OpOverloadPacket.__call__(self_, *args, **kwargs)
    846 def __call__(self_, *args, **kwargs):  # noqa: B902
    847     # use self_ to avoid naming collide with aten ops arguments that
    848     # named "self". This way, all the aten ops can be called by kwargs.
        (...)
    852     # We save the function ptr as the op attribute on
    853     # OpOverloadPacket to access it here.
--> 854     return self_._op(*args, **(kwargs or {}))

RuntimeError: x.dim() == 2 INTERNAL ASSERT FAILED at "csrc/cpu/radius_cpu.cpp":13, please report a bug to PyTorch. Input mismatch
```
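
For anyone who needs to keep moving until the compiled op works on ARM64, a rough pure-PyTorch fallback for the two-tensor call above is sketched below. It is only an approximation under stated assumptions: it ignores `batch_x`/`batch_y` and `max_num_neighbors`, materialises the full pairwise distance matrix with `torch.cdist`, and assumes the documented output convention (a `[2, E]` tensor whose first row indexes `y` and second row indexes `x`); the boundary behaviour at exactly distance `r` may also differ from the compiled kernel, so please double-check against the docs.

```python
import torch

def radius_fallback(x: torch.Tensor, y: torch.Tensor, r: float) -> torch.Tensor:
    """Rough stand-in for torch_cluster.radius(x, y, r) without batch support.

    Assumes the documented output convention: row 0 indexes y, row 1 indexes x.
    """
    dist = torch.cdist(y.float(), x.float())        # [num_y, num_x] pairwise Euclidean distances
    row, col = (dist <= r).nonzero(as_tuple=True)   # all (y, x) pairs within radius r
    return torch.stack([row, col], dim=0)

pts = torch.tensor([[0, 0], [1, 0], [0, 1], [1, 1]], dtype=torch.float)
print(radius_fallback(pts, pts, r=1.5))
```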

I'm having the same issue. Any updates on that?