Improvements in `OpSum` to `TTN` conversion
Follow-up to #116:

- Replace `MatElem` and `QNArrElem` with `FillArrays.OneElement` (see the sketch after this list).
- Rename `determine_val_type` to `coefficient_type`.
- Default the `OpSum` coefficient type to `Float64`, and require users to specify `OpSum{ComplexF64}` if they want complex coefficients (see the sketch after this list).
- Check/improve compatibility with the feature set of the `OpSum` to `MPO` conversion in ITensors: support multi-site operators, ensure sorting comparisons work and are implemented consistently with the ITensors implementation, and implement all relevant sorting with respect to the traversal order of the tree instead of site labels, to ensure compatibility with arbitrary `vertextype`.
- Copy the `ITensors` functions used in `ttn_svd`, like `ITensors.determineValType`, `ITensors.posInLink!`, `ITensors.MatElem`, etc., to `ITensorNetworks.jl` and update their style. Functions like `ITensors.which_op`, `ITensors.params`, `ITensors.site`, `ITensors.argument`, etc. that come from the `Ops` module related to `OpSum` shouldn't be copied over.
- Split off the logic for building the symbolic representation of the TTNO into a separate function.
- Move `calc_qn` outside of `ttn_svd`.
- Use sparse matrix/array data structures or meta-graphs for the symbolic representation of the TTNO (for example, `NDTensors.SparseArrayDOKs` may be useful for that; see the data structure sketch after the first quoted comment below).
- Split off the logic for grouping terms by QNs.
- Factor out the logic for building link indices, making use of `IndsNetwork`.
- Refactor the code logic to first work without merged blocks/QNs, and then optionally merge and compress as needed.
- Support other compression schemes, like rank-revealing sparse QR (see the sketch after this list).
- Implement sequential compression, as opposed to the current method, which uses parallel compression (i.e., right now each link index is effectively compressed independently), to improve performance.
- Allow compression to take operator information into account (perhaps by first expanding in an orthonormal operator basis), not just coefficients.
- Handle starting and ending blocks in a more elegant way, for example as part of a sparse matrix.
- Handle vertices without any site indices (internal vertices, such as for hierarchical TTN).
- Make sure the fermion signs of the tensors being constructed are correct and work with the automatic fermion sign system.
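A minimal sketch of the `FillArrays.OneElement` replacement mentioned in the first item above (values and sizes are illustrative): `OneElement` represents an array that is zero everywhere except at a single stored index, which is the role `MatElem`/`QNArrElem` play now.

```julia
using FillArrays

# A 4x4 matrix that is zero except for the single entry 0.5 at (2, 3).
# Only the value and the index are stored; the full array is never
# materialized.
m = OneElement(0.5, (2, 3), (4, 4))
@assert m[2, 3] == 0.5
@assert m[1, 1] == 0.0

# Vector version: length 6, with 1.0 stored at position 2.
v = OneElement(1.0, 2, 6)
```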
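A sketch of the proposed coefficient-type default, assuming the parametric `OpSum{C}` constructors from ITensors (a bare `OpSum()` has defaulted to `ComplexF64`; the item above proposes flipping that to `Float64`):

```julia
using ITensors

# Real coefficients by default under the proposal:
os = OpSum{Float64}()
os += 0.5, "S+", 1, "S-", 2
os += 0.5, "S-", 1, "S+", 2

# Users who need complex coefficients opt in explicitly:
os_c = OpSum{ComplexF64}()
os_c += 0.5im, "Sz", 1, "Sz", 2
```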
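And a dense sketch of the rank-revealing compression idea using column-pivoted QR from LinearAlgebra (the item above calls for a sparse variant, e.g. via `SparseArrays.qr`; the matrix and tolerance here are illustrative):

```julia
using LinearAlgebra

# A coefficient matrix of numerical rank 3.
A = randn(8, 3) * randn(3, 10)

F = qr(A, ColumnNorm())  # column-pivoted QR; |R[i,i]| is (nearly) decreasing
tol = 1e-12 * abs(F.R[1, 1])
r = count(i -> abs(F.R[i, i]) > tol, 1:size(F.R, 1))  # numerical rank, here 3

# Truncated factors with inner dimension r: A[:, F.p] ≈ Q1 * R1.
Q1 = Matrix(F.Q)[:, 1:r]
R1 = F.R[1:r, :]
@assert norm(A[:, F.p] - Q1 * R1) <= 1e-8 * norm(A)
```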
A comment on the representation of the symbolic TTN object:

> Seems like this data structure could be a `DataGraph` with a graph structure matching the `IndsNetwork`/`TTN` graph structure and a `SparseArrayDOK` stored on the vertices, where the number of dimensions is the degree of the vertex and the elements are `Scaled{coefficient_type,Prod{Op}}`. Does that sound right to you?
>
> I suppose one thing that needs to be stored is the meaning of each dimension of the `SparseArrayDOK` on the vertices, since you want to know which dimension corresponds to which neighbor. So interestingly the best representation may be an `ITensor`, or maybe a `NamedDimsArray` wrapping a `SparseArrayDOK`, where the dimension names are the edges of the graph.

_Originally posted by @mtfishman in #166 (comment)_
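A dictionary-of-keys sketch of that idea (all names here are hypothetical: a plain `Dict` stands in for `SparseArrayDOK`, and a coefficient plus operator names stand in for `Scaled{coefficient_type,Prod{Op}}`, to show the structure rather than the actual NDTensors/DataGraphs/NamedDimsArray APIs):

```julia
# One DOK-style sparse array per vertex: index tuples have one entry per
# incident edge, and we record which edge each dimension corresponds to.
struct VertexDOK{T}
    dim_edges::Vector{String}         # edge label for each dimension
    data::Dict{Tuple{Vararg{Int}},T}  # stored (nonzero) elements
end

# Stand-in for `Scaled{coefficient_type,Prod{Op}}`: a coefficient together
# with a product of operator names.
const SymbolicElem = Tuple{Float64,Vector{String}}

# A degree-2 vertex whose two dimensions are labeled by its incident edges.
dok = VertexDOK{SymbolicElem}(
    ["(v1, v2)", "(v2, v3)"],
    Dict((1, 1) => (1.0, ["Id"]), (2, 1) => (0.5, ["Sz"])),
)

# The symbolic TTN is then a map vertex => DOK; a `DataGraph` would
# additionally carry the graph structure itself.
symbolic_ttn = Dict("v2" => dok)
```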
Regarding the data structure used in the `svd_bond_coefs(...)` function:

> This could be a `DataGraph` with that data on the edges of the graph.
>
> I also wonder if `Dict{QN,Matrix{coefficient_type}}` could be a block diagonal `BlockSparseMatrix`, where those matrices are the diagonal blocks and the QNs are the sector labels of the graded axes.

_Originally posted by @mtfishman in #166 (comment)_
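A plain-Julia sketch of the block diagonal idea (strings stand in for `QN` sector labels; a genuine `BlockSparseMatrix` with graded axes would also carry those labels on the row and column spaces):

```julia
# Per-QN bond coefficient matrices, as currently stored.
coefs = Dict("QN(0)" => [1.0 0.0; 0.0 2.0], "QN(1)" => fill(3.0, 1, 1))

# Assemble into one block diagonal matrix: each QN sector becomes a
# diagonal block, in a fixed sector order.
sectors = sort!(collect(keys(coefs)))
blockdiag = cat((coefs[q] for q in sectors)...; dims=(1, 2))
# 3×3 Matrix{Float64}:
#  1.0  0.0  0.0
#  0.0  2.0  0.0
#  0.0  0.0  3.0
```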