PyTorch overhead
elanmart opened this issue · 0 comments
elanmart commented
Hey @jcjohnson, sorry for opening an issue here — I didn't know how to reach you other than by emailing your Stanford inbox.
I haven't used torch or Lua, but I remember some of my friends talking about your implementation of char-rnn in Lua. They said it was super fast.
I'm wondering if it is possible to do something like that in PyTorch. Or was the speed thanks to Lua's JIT compiler, so that the Python interpreter will simply incur too much overhead? In general, do you think PyTorch is suitable for applications with lots of small computations (char-level, pixel-level stuff)?
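To make the concern concrete, here is a minimal stdlib-only sketch (not from the original question) of the kind of per-step interpreter overhead being asked about: doing the same total work as many tiny dispatched calls versus one fused loop. The function names are illustrative, and timings are machine-dependent.

```python
import timeit

def tiny_step(x):
    # A trivial "op": each call through here pays Python's
    # function-call / dispatch overhead, analogous to launching
    # one small framework op per character or pixel.
    return x + 1

def many_small_calls(n):
    x = 0
    for _ in range(n):
        x = tiny_step(x)   # one interpreter dispatch per step
    return x

def one_fused_loop(n):
    x = 0
    for _ in range(n):
        x = x + 1          # same arithmetic, no call overhead
    return x

if __name__ == "__main__":
    n = 100_000
    t_small = timeit.timeit(lambda: many_small_calls(n), number=10)
    t_fused = timeit.timeit(lambda: one_fused_loop(n), number=10)
    print(f"many small calls: {t_small:.3f}s  fused loop: {t_fused:.3f}s")
```

The gap between the two timings is pure interpreter overhead; a JIT (like LuaJIT) can compile away exactly this cost, which is why frameworks mitigate it by batching work into fewer, larger ops.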