Investigating torch.linalg float64 operations
Closed this issue · 1 comment
pomonam commented
Previously, I noticed that there is a large performance gap when doing `torch.linalg` operations (e.g., eigendecomposition, SVD) using `float32` vs. `float64`. The current codebase uses `float32` (or the original dtype of the tensor), but it might be worth exploring higher precision.
sangkeun00 commented
By default, we now perform `torch.eigh` with `float64`, and then revert the result back to the original dtype. This change was introduced in commit 20a249f.
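The upcast-then-revert pattern described above can be sketched as follows. This is a minimal illustration of the idea, not the repository's actual code; the helper name `eigh_float64` is hypothetical, and `torch.linalg.eigh` is used as the standard PyTorch entry point for symmetric eigendecomposition.

```python
import torch


def eigh_float64(mat: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
    """Eigendecompose a symmetric matrix in float64, then cast back.

    Hypothetical sketch of the approach: upcast for numerical
    stability, compute, and revert to the input's original dtype.
    """
    original_dtype = mat.dtype
    # Perform the decomposition in double precision.
    eigvals, eigvecs = torch.linalg.eigh(mat.to(torch.float64))
    # Revert results to the original dtype.
    return eigvals.to(original_dtype), eigvecs.to(original_dtype)


if __name__ == "__main__":
    m = torch.randn(4, 4, dtype=torch.float32)
    m = m + m.T  # make symmetric
    vals, vecs = eigh_float64(m)
    print(vals.dtype)  # original dtype is preserved
```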