th train.lua -backend nn failed!
Closed this issue · 1 comment
envy@ub1404envy:~/os_prj/github/_QA/VQA_LSTM_CNN$ ll
total 5387680
drwxrwxr-x 7 envy envy 4096 Feb 18 12:33 ./
drwxrwxr-x 5 envy envy 4096 Feb 18 00:37 ../
drwxrwxr-x 4 envy envy 4096 Feb 15 17:29 data/
-rw-rw-r-- 1 envy envy 2014627936 Feb 18 12:32 data_img.h5
-rw-rw-r-- 1 envy envy 2014627936 Dec 14 00:03 data_img.h5-ori
-rw-rw-r-- 1 envy envy 84335736 Feb 18 12:03 data_prepro.h5
-rw-rw-r-- 1 envy envy 9169211 Feb 18 12:03 data_prepro.json
-rw-rw-r-- 1 envy envy 716074236 Dec 16 14:45 data_train_val.zip
-rwxrwxr-x 1 envy envy 9395 Dec 29 19:26 eval.lua*
-rwxrwxr-x 1 envy envy 741 Dec 29 19:26 evaluate.py*
drwxrwxr-x 8 envy envy 4096 Dec 29 19:26 .git/
drwxrwxr-x 2 envy envy 4096 Dec 29 19:26 misc/
drwxrwxr-x 2 envy envy 4096 Feb 17 00:18 model/
-rw-rw-r-- 1 envy envy 3005 Feb 18 12:31 path_to_cnn_prototxt.lua
-rwxrwxr-x 1 envy envy 3403 Dec 29 19:26 prepro_img.lua*
-rwxrwxr-x 1 envy envy 9279 Dec 29 19:26 prepro.py*
-rw-rw-r-- 1 envy envy 53612941 Dec 14 19:57 pretrained_lstm_train.t7
-rw-rw-r-- 1 envy envy 49743190 Dec 16 14:04 pretrained_lstm_train_val.t7.zip
-rwxrwxr-x 1 envy envy 3625 Dec 29 19:26 readme.md*
drwxrwxr-x 2 envy envy 4096 Feb 17 00:18 result/
-rwxrwxr-x 1 envy envy 10759 Dec 29 19:26 train.lua*
-rw-rw-r-- 1 envy envy 574671192 Sep 24 2014 VGG_ILSVRC_19_layers.caffemodel
-rw-rw-r-- 1 envy envy 2715 Feb 18 12:05 yknote---log--1
envy@ub1404envy:~/os_prj/github/_QA/VQA_LSTM_CNN$ th train.lua -backend nn
{
learning_rate_decay_every : 50000
batch_size : 500
gpuid : 0
common_embedding_size : 1024
input_img_h5 : "data_img.h5"
input_encoding_size : 200
learning_rate_decay_start : -1
input_json : "data_prepro.json"
num_output : 1000
input_ques_h5 : "data_prepro.h5"
rnn_size : 512
max_iters : 150000
checkpoint_path : "model/"
save_checkpoint_every : 25000
learning_rate : 0.0003
img_norm : 1
backend : "nn"
rnn_layer : 2
seed : 123
}
DataLoader loading h5 file: data_prepro.h5
DataLoader loading h5 file: data_img.h5
Building the model...
shipped data function to cuda...
/home/envy/torch/install/bin/luajit: train.lua:200: index out of range at /home/envy/torch/pkg/torch/lib/TH/generic/THTensorMath.c:156
stack traceback:
[C]: in function 'index'
train.lua:200: in function 'next_batch'
train.lua:247: in function 'opfunc'
/home/envy/torch/install/share/lua/5.1/optim/rmsprop.lua:32: in function 'rmsprop'
train.lua:303: in main chunk
[C]: in function 'dofile'
...envy/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:145: in main chunk
[C]: at 0x00406670
envy@ub1404envy:~/os_prj/github/_QA/VQA_LSTM_CNN$
Could you double-check that the data you downloaded is consistent? This is not an error in the nn backend; it only means a data index wasn't fetched correctly.
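A quick way to confirm that diagnosis is to check whether every question's image pointer actually falls inside the image-feature matrix. The sketch below shows the core bounds check; the dataset name `img_pos_train` and the 1-based (Torch-style) index convention are assumptions based on the typical prepro output of this repo — in practice you would load the array from `data_prepro.h5` (e.g. with h5py) and compare it against the first dimension of the feature matrix in `data_img.h5`.

```python
import numpy as np

def find_bad_indices(img_pos, num_images):
    """Return the positions whose image index falls outside [1, num_images].

    The prepro scripts are assumed to emit 1-based indices for Torch, so a
    value of 0, or anything larger than the number of image feature rows,
    would trigger exactly the 'index out of range' error seen in next_batch.
    """
    img_pos = np.asarray(img_pos)
    return np.where((img_pos < 1) | (img_pos > num_images))[0]

# Synthetic example: 5 questions pointing into a feature matrix of 3 images.
img_pos_train = np.array([1, 3, 2, 4, 0])   # 4 and 0 are out of range
bad = find_bad_indices(img_pos_train, num_images=3)
print(bad.tolist())  # -> [3, 4]
```

If this reports any bad positions for your real files, the `data_prepro.h5` and `data_img.h5` on disk were produced from different data (note the two differently dated copies `data_img.h5` and `data_img.h5-ori` in the listing above), and regenerating them together should fix the crash.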