harvardnlp/seq2seq-attn

extract_states.lua's error

zhang-jinyi opened this issue · 0 comments

Hi,
I trained my model with input features successfully.
Now I want to inspect the attention states, so I tried extract_states.lua, but it fails.
Here are the logs:

zhang@zhang-XPS-8900:~/Downloads/torch-seq2seq$ th extract_states.lua -model demo-6.5-model_epoch13.00_3.72.t7 -src_file data/src-val-case.txt -targ_file data/targ-val.txt -src_dict demo.src.dict -targ_dict demo.targ.dict
loading demo-6.5-model_epoch13.00_3.72.t7...
done!
loading GOLD labels at data/targ-val.txt
SENT 1: C-|-C &-|-& D-|-D 管-|-⽵ 理-|-⽟ 施-|-⽅ 設-|-⾔ の-|-の 高-|-⾼ 度-|-⼴ 化-|-⼔
Sentence 1 11
ENCODER POS 0
/home/zhang/torch/install/bin/luajit: /home/zhang/torch/install/share/lua/5.1/nngraph/gmodule.lua:362: Got 3 inputs instead of 4
stack traceback:
[C]: in function 'error'
/home/zhang/torch/install/share/lua/5.1/nngraph/gmodule.lua:362: in function 'forward'
extract_states.lua:95: in function 'generate_beam'
extract_states.lua:524: in function 'main'
extract_states.lua:538: in main chunk
[C]: in function 'dofile'
...hang/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
[C]: at 0x55b879c20450

The error is "Got 3 inputs instead of 4".
Does this mean that a model trained with input features cannot be used with extract_states.lua?
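If it helps pin this down, the message looks like nngraph's standard input-count check: a gModule built with N input nodes must be called with a table of exactly N tensors. A model trained with input features has one extra feature input per encoder step, which extract_states.lua apparently does not pass. A minimal sketch of that failure mode (toy graph with hypothetical shapes, not the actual seq2seq-attn encoder):

```lua
require 'nn'
require 'nngraph'

-- Toy encoder step with 4 inputs: word embedding, feature embedding,
-- and two previous states (hypothetical 1x10 shapes).
local word   = nn.Identity()()
local feat   = nn.Identity()()
local h_prev = nn.Identity()()
local c_prev = nn.Identity()()
local joined = nn.JoinTable(2)({word, feat})          -- 1x20
local out    = nn.CAddTable()({nn.Linear(20, 10)(joined), h_prev, c_prev})
local step   = nn.gModule({word, feat, h_prev, c_prev}, {out})

-- Calling it with only 3 inputs (no feature tensor) reproduces the
-- error raised in gmodule.lua: "Got 3 inputs instead of 4"
local ok, err = pcall(function()
  return step:forward({torch.randn(1, 10),
                       torch.randn(1, 10),
                       torch.randn(1, 10)})
end)
print(ok, err)
```

So the script itself likely needs to be extended to feed the feature tensors into the encoder, the same way the training/beam code does.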

Thanks in advance.