data-apis/array-api-strict

Add virtual devices to make it easier for array API consumers to check that they handle devices correctly

ogrisel opened this issue · 3 comments

Motivation: in scikit-learn, we run array API compliance tests with both array_api_strict and array_api_compat.torch. For the latter, we run the tests with cuda, mps, and cpu devices.

Testing with torch is important because it helps us uncover device-handling problems. For example, consider the following problematic function:

def sublinear_mask(data):
    # xp is an array API namespace, e.g. array_api_compat.torch
    return data <= xp.linspace(data[0], data[-1], num=data.shape[0])

Calling this with:

sublinear_mask(xp.asarray([0, 1, 2, 2, 5, 5], device="mps"))

raises a RuntimeError because PyTorch does not implicitly move data across devices: data lives on "mps" while xp.linspace allocates its result on the default device. The bug should be fixed by changing the function to:

def sublinear_mask(data):
    return data <= xp.linspace(data[0], data[-1], num=data.shape[0], device=data.device)

However, not all scikit-learn contributors have access to a machine with a non-cpu device (e.g. "mps" or "cuda"), so they have no easy way to detect this family of bugs by running the tests on their local laptop. They only discover such issues on CI and then need tools such as Google Colab to debug and refine their code instead of their regular dev environment.

To reduce this friction, it would be nice if array-api-strict could create arrays with a = xp.asarray([1, 2, 3], device="virtual_device_a") and b = xp.asarray([1, 2, 3], device="virtual_device_b"), and would raise a RuntimeError on operations that combine arrays from different devices, as PyTorch does.
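To illustrate the requested semantics, here is a minimal, hypothetical sketch in plain Python: an array-like wrapper that tags data with a virtual device name and refuses to combine arrays from different devices, mimicking PyTorch's strictness. The class and device names are illustrative assumptions, not an existing array-api-strict API.

```python
# Hypothetical sketch: virtual devices that exist only for bookkeeping.
# Any cross-device operation raises RuntimeError, as PyTorch does.

class VirtualDeviceArray:
    def __init__(self, data, device="virtual_device_a"):
        self.data = list(data)
        self.device = device

    def _coerce(self, other):
        # Reject operands living on a different virtual device.
        if isinstance(other, VirtualDeviceArray):
            if other.device != self.device:
                raise RuntimeError(
                    f"Expected all arrays to be on the same device, "
                    f"got {self.device!r} and {other.device!r}"
                )
            return other.data
        return other

    def __le__(self, other):
        rhs = self._coerce(other)
        return VirtualDeviceArray(
            [a <= b for a, b in zip(self.data, rhs)], device=self.device
        )


a = VirtualDeviceArray([0, 1, 2], device="virtual_device_a")
b = VirtualDeviceArray([1, 1, 1], device="virtual_device_b")
same = VirtualDeviceArray([1, 1, 1], device="virtual_device_a")

print((a <= same).data)  # [True, True, False]
try:
    a <= b
except RuntimeError as exc:
    print("RuntimeError:", exc)
```

Since no data ever moves, this check is essentially free and would run on any contributor's laptop, with no GPU required.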

This is a good idea. My original idea was to use cupy as a backend #5, but that requires you to have access to a CUDA GPU.


Indeed, I saw #5 and was wondering if it was still considered valid. I am not sure about the value and maintenance complexity of a multi-backend array-api-strict.

As you said, the fact that CuPy requires a working CUDA setup restricts the pool of people who could run it in their day-to-day development environment.

Well this idea definitely achieves the original purpose of having a CuPy backend in a much simpler and more general way. I'm not sure if there are any GPU-specific idiosyncrasies that we might want to support which would be difficult to emulate without actually using a library like CuPy.