ezyang/pytorch-unattached

Word Language Model cannot work with verify=True

zdevito opened this issue · 1 comment

When I try to run the verifier in the tracer, I get this error:

Traceback (most recent call last):
  File "main.py", line 167, in <module>
    train()
  File "main.py", line 139, in train
    output, hidden = model(data, hidden)
  File "/data/users/zdevito/pytorch/torch/nn/modules/module.py", line 259, in __call__
    result = self.forward(*input, **kwargs)
  File "/data/users/zdevito/pytorch/torch/jit.py", line 238, in __call__
    return self.run_closure(trace_info, args, trace_inputs)
  File "/data/users/zdevito/pytorch/torch/jit.py", line 189, in run_closure
    cloned_args = tuple(_clone_inputs(args))
  File "/data/users/zdevito/pytorch/torch/jit.py", line 79, in _clone_inputs
    yield a.clone()
AttributeError: 'tuple' object has no attribute 'clone'

This is how I am modifying the word language model:

model = torch.jit.traced(model, verify=True, time=True, optimize=False)

It looks like _clone_inputs cannot handle nested tuples, because it doesn't use flatten: https://github.com/ezyang/pytorch/blob/jit/torch/jit.py#L74
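
For context, the nesting comes from the hidden state: for the LSTM variant of the word language model the hidden state is a tuple of two Variables, so the traced module receives args = (data, (h, c)) and the per-element clone() hits the tuple. A minimal repro sketch (shapes made up, just for illustration):

    import torch
    from torch.autograd import Variable

    # LSTM hidden state in the word language model: a tuple of two Variables (h, c)
    hidden = (Variable(torch.zeros(2, 20, 200)),
              Variable(torch.zeros(2, 20, 200)))
    data = Variable(torch.LongTensor(35, 20).zero_())

    args = (data, hidden)
    for a in args:
        a.clone()  # AttributeError on the hidden tuple: tuples have no clone()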

Fixing it to use flatten is nontrivial because the clone has to handle both Variable and Tensor, and the default flattener only looks for Variables. Maybe this means we should change the API.
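
One possible workaround while the API question is open would be to skip flatten entirely and clone recursively. This is only a sketch (the Variable constructor arguments and the list handling are my assumptions, not what jit.py does today):

    import torch
    from torch.autograd import Variable

    def _clone_inputs(args):
        # Recursively clone one input, descending into tuples/lists so that
        # structured inputs like an LSTM's (h, c) hidden state survive.
        def clone(a):
            if isinstance(a, (tuple, list)):
                return type(a)(clone(x) for x in a)
            elif isinstance(a, Variable):
                # clone the underlying data and carry over the autograd flags
                return Variable(a.data.clone(), requires_grad=a.requires_grad,
                                volatile=a.volatile)
            else:
                # plain Tensor
                return a.clone()
        for a in args:
            yield clone(a)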