A tensor network wrapper for TensorFlow, JAX, PyTorch, and NumPy.
For an overview of tensor networks, and for more information, please see our TensorNetwork papers:
- TensorNetwork on TensorFlow: A Spin Chain Application Using Tree Tensor Networks
- TensorNetwork on TensorFlow: Entanglement Renormalization for quantum critical lattice models
Install with pip:
pip3 install tensornetwork
For details about the TensorNetwork API, see the reference documentation.
A tutorial on Tensor Networks inside Neural Networks using Keras is also available.
Here, we build a simple two-node contraction.
import numpy as np
import tensornetwork as tn
# Create the nodes
a = tn.Node(np.ones((10,)))
b = tn.Node(np.ones((10,)))
edge = a[0] ^ b[0] # Equal to tn.connect(a[0], b[0])
final_node = tn.contract(edge)
print(final_node.tensor) # Should print 10.0
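The same pattern extends to larger networks. Below is a minimal sketch (the node names and shapes are arbitrary choices for illustration) that contracts a chain of three nodes edge by edge:
x = tn.Node(np.ones((10,)))
y = tn.Node(np.ones((10, 10)))
z = tn.Node(np.ones((10,)))
# Connect the chain x - y - z
e1 = x[0] ^ y[0]
e2 = y[1] ^ z[0]
# Contract one edge at a time; the remaining edge is transferred
# to the node created by the first contraction.
tn.contract(e1)
result = tn.contract(e2)
print(result.tensor)  # Should print 100.0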
Usually, it is more computationally efficient to flatten parallel edges before contracting them in order to avoid trace edges. We provide contract_between and contract_parallel, which do this automatically for your convenience.
# Contract all of the edges between a and b
# and create a new node `c`.
c = tn.contract_between(a, b)
# This is the same as above, but much shorter.
c = a @ b
# Contract all of edges that are parallel to edge
# (parallel means connected to the same nodes).
c = tn.contract_parallel(edge)
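As a quick, self-contained sketch of the shorthand above (the shapes here are arbitrary):
x = tn.Node(np.ones((3, 3)))
y = tn.Node(np.ones((3, 3)))
# Create two parallel edges between x and y.
x[0] ^ y[0]
x[1] ^ y[1]
result = x @ y  # contracts both shared edges in one step
print(result.tensor)  # Should print 9.0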
You can split a node by doing a singular value decomposition.
# This will return two nodes and a tensor of the truncation error.
# The two nodes are the unitary matrices multiplied by the square root of the
# singular values.
# The `left_edges` are the edges that will end up on the `u_s` node, and `right_edges`
# will be on the `vh_s` node.
u_s, vh_s, trun_error = tn.split_node(node, left_edges, right_edges)
# If you want the singular values in their own node, you can use `split_node_full_svd`.
u, s, vh, trun_error = tn.split_node_full_svd(node, left_edges, right_edges)
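The snippet above assumes you already have a node and lists of left and right edges. Here is a self-contained sketch; the node below and the max_singular_values cutoff are illustrative choices, not required arguments:
matrix = tn.Node(np.random.rand(4, 6))
u_s, vh_s, trun_error = tn.split_node(
    matrix,
    left_edges=[matrix[0]],
    right_edges=[matrix[1]],
    max_singular_values=2)  # keep only the two largest singular values
# `u_s` and `vh_s` stay connected by a new bond edge of dimension 2,
# and `trun_error` holds the discarded singular values.
print(u_s.tensor.shape, vh_s.tensor.shape)  # Should print (4, 2) (2, 6)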
You can optionally name your nodes/edges. This can be useful for debugging, as all error messages will print the name of the broken edge/node.
node = tn.Node(np.eye(2), name="Identity Matrix")
print("Name of node: {}".format(node.name))
edge = tn.connect(node[0], node[1], name="Trace Edge")
print("Name of the edge: {}".format(edge.name))
# Passing a name to a contraction will name the new node that is created.
final_result = tn.contract(edge, name="Trace Of Identity")
print("Name of new node after contraction: {}".format(final_result.name))
To make it easier to remember what each axis does, you can optionally name a node's axes.
a = tn.Node(np.zeros((2, 2)), axis_names=["alpha", "beta"])
edge = a["beta"] ^ a["alpha"]
To ensure that your result's axes are in the correct order, you can reorder a node's edges at any time during computation.
a = tn.Node(np.zeros((1, 2, 3)))
e1 = a[0]
e2 = a[1]
e3 = a[2]
a.reorder_edges([e3, e1, e2])
# If you already know the axis values, you can equivalently do
# a.reorder_axes([2, 0, 1])
print(a.tensor.shape) # Should print (3, 1, 2)
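The commented-out alternative can be checked the same way (a small sketch on a fresh node):
m = tn.Node(np.zeros((1, 2, 3)))
m.reorder_axes([2, 0, 1])
print(m.tensor.shape)  # Should print (3, 1, 2)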
For a more compact specification of a tensor network and its contraction, there is ncon(). For example:
from tensornetwork import ncon
a = np.ones((2, 2))
b = np.ones((2, 2))
c = ncon([a, b], [(-1, 1), (1, -2)])
print(c)
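Repeated positive labels are contracted and negative labels become the open indices of the result, so the call above is just a matrix product and c is a 2x2 matrix of 2.0s. With no negative labels, ncon returns a scalar; here is a small sketch computing trace(a @ b):
t = ncon([a, b], [(1, 2), (2, 1)])
print(t)  # Should print 4.0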
It is also possible to generate a set of nodes that represent the given tensor network.
from tensornetwork import ncon_network
a = np.ones((2, 2))
b = np.ones((2, 2))
nodes, e_con, e_out = ncon_network([a, b], [(-1, 1), (1, -2)])
for e in e_con:
    n = tn.contract(e)  # Contract edges in order
n.reorder_edges(e_out)  # Permute final tensor as necessary
print(n.tensor)
Currently, we support JAX, TensorFlow, PyTorch and NumPy as TensorNetwork backends.
We also support tensors with Abelian symmetries via a symmetric backend; see the reference documentation for more details.
To change the default global backend, you can do:
tn.set_default_backend("jax") # tensorflow, pytorch, numpy, symmetric
Or, if you only want to change the backend for a single Node, you can do:
tn.Node(tensor, backend="jax")
If you want to run your contractions on a GPU, we highly recommend using JAX, as it has the closest API to NumPy.
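For example, here is a minimal sketch of the two-node contraction from above on the JAX backend (this assumes jax and jaxlib are installed):
tn.set_default_backend("jax")
x = tn.Node(np.ones((10,)))
y = tn.Node(np.ones((10,)))
print(tn.contract(x[0] ^ y[0]).tensor)  # Should print 10.0 (as a JAX array)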
This library is in alpha and will be going through a lot of breaking changes. While releases will be stable enough for research, we do not recommend using this in any production environment yet.
TensorNetwork is not an official Google product. Copyright 2019 The TensorNetwork Developers.