Switch from global `device` config to tensor-wise configuration
M-Lampert opened this issue · 0 comments
M-Lampert commented
Now that almost all core functions are implemented as torch operations that can utilize the GPU, we have fixed most runtime issues and run into the next bottleneck, namely memory (GPU RAM). It might thus become necessary to give the user more control over which parts of a `Graph` object are stored on CPU and which on GPU. Although a global configuration is more convenient, it might become necessary to add `to(device)` methods to enable batch-wise computations.
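A minimal sketch of what such a per-object `to(device)` method could look like, following the `torch.nn.Module.to` convention of returning the object itself. The class and attribute names (`Graph`, `edge_index`, `node_attr`) are assumptions for illustration, not the actual implementation:

```python
import torch


class Graph:
    """Hypothetical graph container holding its data as torch tensors."""

    def __init__(self, edge_index: torch.Tensor, node_attr: torch.Tensor):
        self.edge_index = edge_index
        self.node_attr = node_attr

    def to(self, device) -> "Graph":
        # Move all tensor members to the target device and return self,
        # so calls can be chained like torch.nn.Module.to.
        self.edge_index = self.edge_index.to(device)
        self.node_attr = self.node_attr.to(device)
        return self


# Keep large attributes on CPU by default and move individual objects
# (e.g. mini-batches) to the GPU only when needed.
g = Graph(torch.tensor([[0, 1], [1, 2]]), torch.randn(3, 8))
g.to("cpu")
print(g.node_attr.device)
```

This would let users trade runtime for memory by moving only the batch currently being processed onto the GPU, instead of pinning the whole object there via a global setting.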