Collecting from the PyTorch tutorial and other sources (updating).
I have been listing some useful and common operations here to easily keep track of them while coding.
Tensor:
- Given that x and y are Tensor matrices, the add operation can be written in three equivalent ways:
  `x + y`, `torch.add(x, y, out=result)`, `y.add_(x)`
- Any operation that mutates a tensor in-place is post-fixed with an `_`. For example, `x.copy_(y)` copies y into x.
- More than 100 torch operations are listed in this doc
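A minimal sketch of the addition variants above, showing that the in-place form mutates its receiver while the others return a new tensor:

```python
import torch

x = torch.ones(2, 2)
y = torch.ones(2, 2)

z1 = x + y              # operator form, returns a new tensor
z2 = torch.add(x, y)    # functional form, also returns a new tensor
y.add_(x)               # in-place form: y itself now holds the sum

assert torch.equal(z1, z2)
assert torch.equal(y, z1)
```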
- Converting a Torch Tensor to NumPy and vice versa:

```python
# torch tensor to numpy
a = torch.ones(5)
b = a.numpy()

# numpy to torch tensor
a = np.ones(5)
b = torch.from_numpy(a)
```
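One point worth noting about the conversion above: for CPU tensors, the Torch tensor and the NumPy array share the same underlying memory, so mutating one changes the other. A small sketch:

```python
import numpy as np
import torch

a = torch.ones(5)
b = a.numpy()            # b shares memory with a (CPU tensors only)
a.add_(1)                # mutating a in-place also changes b
assert b[0] == 2.0

c = np.ones(5)
d = torch.from_numpy(c)  # d shares memory with c
c += 1                   # in-place NumPy update is visible through d
assert d[0].item() == 2.0
```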
- Resize/reshape: use `Tensor.view()`:
  `z = x.view(-1, n)  # the size -1 is inferred from the other dimensions`
- Tensors can be moved onto the GPU using the `.cuda()` method:
```python
if torch.cuda.is_available():
    x = x.cuda()
    y = y.cuda()
    x + y
```
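The reshape and GPU-transfer steps above can be sketched together; the shapes here are illustrative:

```python
import torch

x = torch.arange(12, dtype=torch.float32)  # shape (12,)
z = x.view(-1, 4)       # -1 is inferred from the other dimension: shape (3, 4)
assert z.shape == (3, 4)

# move to the GPU only when one is available
if torch.cuda.is_available():
    z = z.cuda()
```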
- `torch.mm(mat1, mat2, out=None)` multiplies two matrices `mat1` and `mat2`
- `torch.mv(mat1, vec1, out=None)` multiplies a matrix `mat1` by a vector `vec1`
- `torch.clamp(input, min, max, out=None)` clamps all elements in `input` into the range `[min, max]` and returns the resulting tensor
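A short sketch of the three functions above on small example values:

```python
import torch

mat1 = torch.tensor([[1., 2.], [3., 4.]])
mat2 = torch.eye(2)
vec1 = torch.tensor([1., 1.])

mm_out = torch.mm(mat1, mat2)         # matrix-matrix product, shape (2, 2)
mv_out = torch.mv(mat1, vec1)         # matrix-vector product, shape (2,)
clamped = torch.clamp(mat1, 2., 3.)   # every element limited to [2, 3]

assert torch.equal(mm_out, mat1)                              # identity product
assert torch.equal(mv_out, torch.tensor([3., 7.]))            # row sums
assert torch.equal(clamped, torch.tensor([[2., 2.], [3., 3.]]))
```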
The autograd package provides automatic differentiation for all operations on Tensors.
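A minimal autograd sketch: setting `requires_grad=True` tells autograd to record operations on the tensor, and calling `backward()` populates `.grad` with the derivative.

```python
import torch

# requires_grad=True makes autograd track operations on x
x = torch.ones(2, 2, requires_grad=True)
y = (x * 3).sum()      # y = 3 * sum(x)
y.backward()           # compute dy/dx

# dy/dx_i = 3 for every element of x
assert torch.equal(x.grad, torch.full((2, 2), 3.0))
```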
