"torch.DoubleTensor.t not currently supported". Workarounds?
vadimkantorov opened this issue · 5 comments
I'm trying a toy example of auto-differentiated pairwise L2 distances module:
require 'nn'
autograd = require 'autograd'
function pdist(embeddings)
  local pdist = torch.mm(embeddings, embeddings:t())
  local norm = pdist:diag():view(pdist:size(1), 1):expandAs(pdist)
  return pdist:mul(-2.0):add(norm):add(norm:t()):sqrt()
end
m = autograd.nn.AutoModule('AutoPairwiseL2')(pdist)
print(nn.Jacobian.testJacobian(m, torch.rand(50, 128)))
Unfortunately it breaks:
function: 0x41b0ef00
...fix/bin/luajit: ...fix/share/lua/5.1/autograd/runtime/direct/DirectTape.lua:50: function torch.DoubleTensor.t not currently supported by autograd
stack traceback:
[C]: in function 'error'
...fix/share/lua/5.1/autograd/runtime/direct/DirectTape.lua:50: in function 't'
test.lua:5: in function 'fun'
...fix/share/lua/5.1/autograd/runtime/direct/DirectTape.lua:112: in function 'funOnly'
...fix/share/lua/5.1/autograd/runtime/direct/DirectTape.lua:218: in function 'b'
...wigwam/prefix/share/lua/5.1/autograd/auto/AutoModule.lua:52: in function 'updateGradInput'
..._gpu101_105/.wigwam/prefix/share/lua/5.1/nn/Jacobian.lua:21: in function 'backward'
..._gpu101_105/.wigwam/prefix/share/lua/5.1/nn/Jacobian.lua:235: in function 'testJacobian'
test.lua:12: in main chunk
[C]: in function 'dofile'
...105/.wigwam/prefix/lib/luarocks/rocks/trepl/scm-1/bin/th:145: in main chunk
[C]: at 0x00410a40
Any suggested workarounds?
torch.permute is not supported either...
Since torch.transpose is not supported in regular Torch, I had to define:
torch.transpose = function(x) return x:t() end
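For illustration, a minimal sketch of that shim in plain Torch (the one-argument form here mirrors x:t() rather than the two-argument transpose(dim1, dim2) convention):
-- hypothetical usage of the one-argument shim outside autograd; equivalent to x:t()
torch.transpose = function(x) return x:t() end
a = torch.rand(5, 10)
print(torch.transpose(a):size()) -- 10 x 5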
I think it is supported in autograd.
Hey Vadim,
In-place operations are not really supported in torch-autograd. Could you use the full torch primitives? i.e. embeddings:t() -> embeddingsT = torch.transpose(embeddings), and so on.
Best,
Nico
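For reference, a minimal sketch of pdist rewritten along those lines (free-function transpose, out-of-place arithmetic). Whether every primitive used here has a registered gradient in torch-autograd is an assumption, not something confirmed in this thread:
-- hypothetical rewrite per the suggestion above: no :t(), no in-place mul/add;
-- torch.transpose is assumed here to take the two dimensions to swap
function pdist(embeddings)
  local gram = torch.mm(embeddings, torch.transpose(embeddings, 1, 2))
  local norm = torch.expandAs(torch.view(torch.diag(gram), gram:size(1), 1), gram)
  return torch.sqrt(norm + torch.transpose(norm, 1, 2) - gram * 2)
end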
torch.transpose isn't supported in regular Torch (master branch from a few days ago):
a = torch.rand(5, 10)
print(a:transpose(2, 1):size())
print(torch.transpose(a, 2, 1):size())
will print:
10
5
[torch.LongStorage of size 2]
.../prefix/bin/luajit: transpose.lua:3: attempt to call field 'transpose' (a nil value)
stack traceback: ...
It is indeed supported in autograd, but that creates a nasty discrepancy with the functions I have and use elsewhere (even outside of autograd). If autograd supported torch.permute, it would solve the issue, since torch.permute exists in regular Torch.
Also, embeddings:t() isn't done in-place (if you meant operations that modify tensors in-place); embeddings keeps its original shape.
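For what it's worth, a quick check of that last point in plain Torch (a sketch, outside of autograd):
-- :t() returns a transposed view; the original tensor's shape is untouched
embeddings = torch.rand(50, 128)
transposed = embeddings:t()
print(embeddings:size()) -- still 50 x 128
print(transposed:size()) -- 128 x 50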