oseledets/ttpy

dtt_svd error


When constructing a TT matrix from a Hamiltonian operator with a very large number of degrees of freedom, the norm of the matrices passed to the QR factorization routine becomes too large.

The error can be reproduced using the attached code.

tt-tfd.py.gz

Raffaele

I have found a much simpler example that shows the problem.
The problem is in the dtt_ort() routine: the first element of the U matrix keeps growing, and after a large number of iterations the result is NaN.
I am attaching the example.
tt-round.py.gz

After looking at the methodology and at the code, I don't believe this is an error.
When sweeping through the cores with the QR decomposition, the R factor keeps growing and there is no simple way to avoid it. In the example I take K tensor products of identity matrices E, each of size N. The R factor of E reshaped as an (N*N) vector is just sqrt(N), so after sweeping through K cores it becomes N^(K/2). When K is very large this causes numerical problems: in my case the result is NaN after K = 882. The code keeps running without detecting the NaN (though this may be compiler-dependent). A small self-contained demonstration of the growth is below.
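The growth can be reproduced with plain NumPy (this is only an illustration of the arithmetic above, not the actual dtt_ort() routine): each core is the N x N identity flattened to a column of norm sqrt(N), and carrying the R factor forward multiplies it by sqrt(N) at every QR step until it is no longer representable.

```python
import numpy as np

N = 4      # mode size of each identity factor
K = 1200   # number of cores; the carried factor would be sqrt(N)**K = 2**1200

core = np.eye(N).reshape(N * N, 1)   # one core flattened to a column, norm sqrt(N)
carry = np.ones((1, 1))              # R factor carried from the previous core

for k in range(K):
    mat = core @ carry               # fold the carried R into the current core
    if not np.isfinite(mat).all():   # 0 * inf from the overflowed carry gives NaN
        print(f"non-finite core after {k} QR steps; the carried factor overflowed")
        break
    q, r = np.linalg.qr(mat)
    carry = r                        # |carry| grows by sqrt(N) at every step
else:
    print("final carried factor:", abs(carry[0, 0]), "~ N**(K/2)")
```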
As far as I can see it doesn't seem like a bug. Am I right?

@rborrelli it is possible to fix this by using, instead of a QR decomposition, a decomposition where the columns of Q are orthogonal (but not orthonormal) and R has a small norm.
I.e., you can write 10^100 as 10^100 * 1, or you can write it as 10 x 10 x 10 x ... x 10. In the latter case there is no overflow.
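For concreteness, one way this could look (a rough NumPy sketch of the idea, not the ttpy implementation; `orthogonalize_with_scaling` is a made-up name): renormalize the carried R at every step, accumulate its scale in log space, and then spread the total scale evenly over the K cores, which is exactly the 10 x 10 x ... x 10 representation.

```python
import numpy as np

def orthogonalize_with_scaling(cores):
    """Left-to-right QR sweep that keeps the carried R factor at unit norm
    and returns the total scale spread evenly over the cores."""
    K = len(cores)
    log_scale = 0.0
    carry = np.ones((1, 1))
    out = []
    for G in cores:                                    # G has shape (r_prev, n, r_next)
        r_prev, n, r_next = G.shape
        mat = (carry @ G.reshape(r_prev, n * r_next)).reshape(-1, r_next)
        q, r = np.linalg.qr(mat)
        nrm = np.linalg.norm(r)
        if nrm > 0.0:
            r = r / nrm                                # carried factor stays O(1)
            log_scale += np.log(nrm)                   # remember its scale in log space
        out.append(q.reshape(carry.shape[0], n, -1))
        carry = r
    per_core = np.exp(log_scale / K)                   # e.g. 10 instead of 10**100
    return [G * per_core for G in out], carry

# the example that previously overflowed: K cores of the N x N identity
N, K = 4, 1200
cores = [np.eye(N).reshape(1, N * N, 1) for _ in range(K)]
new_cores, last_r = orthogonalize_with_scaling(cores)
print("norm of each rescaled core:", np.linalg.norm(new_cores[0]))  # ~sqrt(N), finite
```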

The same problem occurs in the SVD procedure. There the truncated singular value matrices of each core are multiplied K times, running into overflow for very large K. Of course the same trick can be applied in this case. I was wondering, however, whether in the truncated SVD procedure the singular values of core J could be used to define the new truncated core J, instead of being multiplied into core J-1.
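To make the question concrete, here is a sketch of one truncation step (plain NumPy, not the ttpy code; `truncate_step` is a made-up helper) showing the two placements of the singular values: multiplying U*diag(S) into core J-1 as described above, versus keeping diag(S)*V^T in the new core J so that only the orthonormal factor U is carried over.

```python
import numpy as np

def truncate_step(core, r_trunc):
    """One truncation step of the sweep, for a core of shape (r_prev, n, r_next)."""
    r_prev, n, r_next = core.shape
    u, s, vt = np.linalg.svd(core.reshape(r_prev, n * r_next), full_matrices=False)
    u, s, vt = u[:, :r_trunc], s[:r_trunc], vt[:r_trunc, :]

    # variant A (as described above): the new core J keeps only V^T,
    # and U * diag(S) is multiplied into core J-1, so the norm accumulates there
    carry_A = u * s
    new_core_A = vt.reshape(-1, n, r_next)

    # variant B (the question): absorb diag(S) into the new core J itself,
    # so only the orthonormal factor U is carried to core J-1
    carry_B = u
    new_core_B = (s[:, None] * vt).reshape(-1, n, r_next)

    return (carry_A, new_core_A), (carry_B, new_core_B)
```

In variant B the factor carried to core J-1 has orthonormal columns, so its norm cannot grow with K, which is the motivation behind the question.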