Jutho/TensorKit.jl

Difference between @planar and @tensor for fermionic tensors?


Dear Jutho:
I want to write my own quantum chemistry DMRG based on TensorKit. I noticed that using @planar and @tensor on fermionic tensors can lead to different results. Is there any documentation on the motivation for using @planar for fermionic tensor networks, or on when it is necessary to use @planar for fermionic tensor networks?

Hi Gouchu,

There are some subtleties involved in working with fermionic tensors, which can be dealt with in several different ways; these different approaches are reflected in the different macros. Let me first advertise MPSKit.jl, which already has a full implementation of DMRG for fermionic tensors and might be just what you are looking for. Nevertheless, let me also try to elaborate a bit, to give an idea of what is causing the problems and to hint at how they are overcome.

Basically, contracting fermionic tensors consists of two components. The first is the ability to reorder the indices of a tensor, which picks up minus signs according to the well-known rule $|i\rangle|j\rangle = (-1)^{|i||j|}|j\rangle|i\rangle$. This is automatically taken care of by the fermionic tensors, which implement precisely that rule.
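As a minimal sketch of what this looks like in practice (assuming a reasonably recent TensorKit version; constructor spellings such as `Vect[FermionParity]` and the `TensorMap(randn, ...)` form may differ slightly between releases):

```julia
using TensorKit

# a graded fermionic space with one even- and one odd-parity mode
V = Vect[FermionParity](0 => 1, 1 => 1)

# a random fermionic tensor A : V ⊗ V ← V
A = TensorMap(randn, ComplexF64, V ⊗ V, V)

# reordering indices through the macro automatically inserts the
# (-1)^{|i||j|} exchange signs from the rule above
@tensor B[b, a, c] := A[a, b, c]
```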

The second part is where things become a little more subtle. When contracting tensors, it is natural to start from something like $\langle i | j \rangle = \delta_{ij}$, which is the canonical inner product on these vector spaces. The problem arises when you want to contract something like $|i\rangle\langle j|$, i.e. evaluate the pairing in the opposite order, as there are now two consistent options for handling this case.

One possibility is to also define this to be equal to $\delta_{ij}$, which is natural for many physics-related contractions such as norms, expectation values, etc. However, for everything to remain consistent, this has the consequence that self-crossing lines need to introduce an additional parity matrix, which can be understood as inserting a permuted identity matrix wherever such a self-crossing occurs. The network contraction algorithm therefore needs to know where these crossings occur, which is not uniquely determined by the index notation of the macro. @planar finds a way out by letting the user promise that there is a planar diagram (one without crossing edges) corresponding to the network, which then uniquely determines the contraction, and thus the result.
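For concreteness, here is a hedged sketch, continuing with `V` from above, of a network that is manifestly planar, for which @planar and @tensor should agree (the semicolons separate codomain and domain indices; treat the exact @planar syntax as an assumption to check against the docs):

```julia
M = TensorMap(randn, ComplexF64, V, V)
N = TensorMap(randn, ComplexF64, V, V)

# matrix multiplication: a planar diagram with no crossing lines
@planar P[a; c] := M[a; b] * N[b; c]

# for such a planar network, @tensor produces the same result
@tensor P2[a; c] := M[a; b] * N[b; c]
P ≈ P2  # expected to hold, since no self-crossings are involved
```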

Alternatively, you can define the inner product to include a factor $(-1)^{|i||j|}$, which avoids all of the issues with self-crossing edges being non-trivial, but might not correspond to the network that you want. In particular, this results in a supertrace instead of a trace, which needs to be compensated for, for example when computing the overlap of a state with itself, where a supertrace is unwanted. This is the context of @tensor, where the user is required to compensate for these unwanted supertraces manually by inserting twist calls on the tensors.
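As a hedged illustration of that last point, reusing `A : V ⊗ V ← V` from above (the precise twist conventions are documented in TensorKit, so treat the index choice here as an assumption to double-check):

```julia
# tracing the second codomain leg against the domain leg with @tensor
# yields the supertrace over that closed loop: odd-parity contributions
# enter with an extra minus sign
@tensor a_super[x] := A[x, c, c]

# inserting a twist on one leg of the loop compensates the unwanted sign
# and recovers the ordinary trace (for fermions, twisting either leg of
# the loop has the same effect)
@tensor a_trace[x] := twist(A, 2)[x, c, c]
```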

We are working on adding more clarification on this topic to the documentation and on writing up a manuscript about it, but this is still work in progress. In any case, I hope this gives you some idea of why these two macros exist and what causes the results to differ.

Hi lkdvos:
Thanks a lot for your prompt reply! I roughly understand the second solution, but not the first one. Looking forward to your paper and documentation on this!