tum-pbs/PhiFlow

Scaling PhiFlow across multiple GPUs

joyjitkundu032 opened this issue · 1 comment

Is there any way to scale PhiFlow across multiple GPUs?

holl- commented

Multi-GPU is not yet officially supported. Here is what you can do:

You can list all available GPUs using `backend.default_backend().list_devices('GPU')`. Then you can set one as the default device using `backend.default_backend().set_default_device()`. All tensor initializers will then allocate on that GPU.

You can use one of the native backend functions, such as JAX's `pmap`, to parallelize your function. This currently requires you to pass only native tensors to the function.
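As a sketch of the `pmap` route: the function below is replicated across all local devices, with the leading axis of the input split one slice per device. It operates on native JAX arrays only; converting PhiFlow tensors to native arrays first (e.g. via their `native()` method, whose exact argument names vary by version) is left out here.

```python
# Sketch: parallelize a per-device step function with jax.pmap.
import jax
import jax.numpy as jnp

n = jax.local_device_count()  # 1 on a CPU-only machine

@jax.pmap
def step(x):
    # Runs once per device on its slice of the leading axis.
    return x * 2.0

# Leading axis must equal the number of devices.
x = jnp.arange(n * 4, dtype=jnp.float32).reshape(n, 4)
y = step(x)
print(y.shape)  # (n, 4)
```

On a machine with several GPUs, `n` equals the GPU count and each slice `x[i]` is processed on device `i` in parallel.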

Multi-GPU support may be added in the future, but it is not a priority for us right now. Contributions are welcome!