A proposal for a named tensor library for PyTorch, described in the blog post:
http://nlp.seas.harvard.edu/NamedTensor
Currently the library targets the PyTorch ecosystem and Python >= 3.6.
```python
from namedtensor import ntorch
```
All PyTorch builders take an extra keyword argument `names`.
```python
x = ntorch.randn(10, 10, 20, names=("batch", "h", "w"))
x = ntorch.ones(10, 10, 20, names=("batch", "h", "w"))
```
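For comparison, here is a minimal sketch of what the names replace in plain PyTorch; the `torch` calls are standard, and only the named calls come from this library:

```python
import torch
from namedtensor import ntorch

# Plain PyTorch: the meaning of each axis lives only in comments and convention.
plain = torch.randn(10, 10, 20)  # (batch, h, w) -- by convention only

# Named tensor: the axis names travel with the tensor itself.
named = ntorch.randn(10, 10, 20, names=("batch", "h", "w"))
```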
All functions that preserve dimensionality work as before; the names are carried through unchanged.
```python
x = x.log()
x = x.float()
x = ntorch.exp(x)
```
View, transpose, and friends are deprecated in favor of named access and movement.
```python
x = x.stack(("w", "h"), "stackdim")

# Roundtrip
x = x.split("stackdim", ("w", "h"), w=20)
```
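For intuition, here is a hedged sketch of the positional reshaping that `stack`/`split` replace, assuming the `("batch", "h", "w")` layout above and one plausible ordering of the flattened dimensions:

```python
import torch

t = torch.randn(10, 10, 20)                      # (batch, h, w)

# stack(("w", "h"), "stackdim"): bring w and h together, then flatten them.
stacked = t.permute(0, 2, 1).reshape(10, 200)    # (batch, stackdim)

# split("stackdim", ("w", "h"), w=20): undo the flattening.
restored = stacked.reshape(10, 20, 10).permute(0, 2, 1)  # back to (batch, h, w)
```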
Transposition:
x = x.transpose("batch", "w", "h")
# or early dim stay in place
x = x.transpose("w", "h")
Any function with a `dim` argument can now be called with the dimension name instead.
```python
x = x.narrow("w", 0, 10)
x = x.softmax("w")
```
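These are the same calls in positional PyTorch, where `"w"` happens to be dim 2 in the layout above:

```python
import torch

t = torch.randn(10, 10, 20)   # (batch, h, w)

t = t.narrow(2, 0, 10)        # narrow("w", 0, 10): "w" is dim 2
t = t.softmax(dim=2)          # softmax("w")
```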
This is true of reduction functions as well, where the named dimension is eliminated.
```python
x = x.mean("w")
x, argmax = x.max("w")
```
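Positionally this corresponds to the usual reductions, with the reduced dimension dropping out of the shape:

```python
import torch

t = torch.randn(10, 10, 20)    # (batch, h, w)

m = t.mean(dim=2)              # mean("w") -> shape (batch, h)
vals, idx = t.max(dim=2)       # max("w") -> values and argmax, both (batch, h)
```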
Matrix operations also take dimension arguments; contracting over persistent names can replace `einsum`.
```python
x = ntorch.randn(10, 10, 20, names=("batch", "h", "w"))
y = ntorch.randn(10, 20, 30, names=("batch", "w", "c"))
x.dot("w", y)
```
This also makes indexing much easier to read.
```python
x = ntorch.ones(10, 10, 20, names=("batch", "time", "vocab")).long()  # indices must be integer-typed
y = ntorch.randn(20, 30, names=("vocab", "embsize"))
y.index_select("vocab", x)
```
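One plausible positional equivalent of this lookup, assuming the integer tensor indexes rows of `y`: flatten the indices, select, and restore the shape:

```python
import torch

ids = torch.ones(10, 10, 20, dtype=torch.long)   # (batch, time, vocab) of row indices
emb = torch.randn(20, 30)                        # (vocab, embsize)

# Flatten the index tensor, gather rows, then restore the leading shape.
out = emb.index_select(0, ids.view(-1)).view(10, 10, 20, 30)
```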
This part of the API is a work in progress, but many NN units are already implemented to work with named tensors.
```python
linear = ntorch.nn.Linear(20, 25)
x = linear(x)

# or rename the layer's output dimension first
linear.rename(wout="w")
x = linear(x)
```
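A small end-to-end sketch that composes only the named calls shown above, to give a feel for the workflow:

```python
from namedtensor import ntorch

x = ntorch.randn(10, 10, 20, names=("batch", "h", "w"))

attn = x.softmax("w")     # normalize over the named "w" dimension
pooled = attn.mean("h")   # reduce "h"; the result keeps ("batch", "w")
```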
- Named NN
- Named Distributions library
Documentation: http://nlp.seas.harvard.edu/namedtensor/
Contributors:
- Alexander Rush (srush@seas.harvard.edu, @harvardnlp)
- Yuntian Deng
- Francisco Rivera
- Jiafeng Chen
- Celine Liang
- Miro Furtado