Index out of bounds error in model.fit
ilteralp commented
I have been trying to run the usage.ipynb notebook. Even after applying the fix described in #46, the line
accuracy, loss, t_step = model.fit(X_train, y_train, X_val, y_val)
under the 3 Graph ConvNet header gives the error below:
NN architecture
input: M_0 = 112
layer 1: cgconv1
representation: M_0 * F_1 / p_1 = 112 * 32 / 4 = 896
weights: F_0 * F_1 * K_1 = 1 * 32 * 20 = 640
biases: F_1 = 32
layer 2: cgconv2
representation: M_1 * F_2 / p_2 = 28 * 64 / 2 = 896
weights: F_1 * F_2 * K_2 = 32 * 64 * 20 = 40960
biases: F_2 = 64
layer 3: fc1
representation: M_3 = 512
weights: M_2 * M_3 = 896 * 512 = 458752
biases: M_3 = 512
layer 4: logits (softmax)
representation: M_4 = 3
weights: M_3 * M_4 = 512 * 3 = 1536
biases: M_4 = 3
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-8-cbcc65c6f533> in <module>()
1 model = models.cgcnn(L, **params)
----> 2 accuracy, loss, t_step = model.fit(X_train, y_train, X_val, y_val)
~\gcn\cnn_graph\lib\models.py in fit(self, train_data, train_labels, val_data, val_labels)
103 idx = [indices.popleft() for i in range(self.batch_size)]
104
--> 105 batch_data, batch_labels = train_data[idx,:], train_labels[idx]
106 if type(batch_data) is not np.ndarray:
107 batch_data = batch_data.toarray() # convert sparse matrices
IndexError: index 1665 is out of bounds for axis 0 with size 100
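For what it's worth, the failing indexing can be reproduced with plain NumPy. The shapes below only mirror what the error message reports (100 rows along axis 0, an index of 1665 in the batch); the variable names and contents are made up:

```python
import numpy as np

# train_data in my run has only 100 rows along axis 0,
# but the shuffled index queue hands out positions far beyond that.
train_data = np.zeros((100, 112))   # 100 samples, M_0 = 112 features
idx = [1665, 3, 42]                 # one out-of-range index is enough

batch_data = train_data[idx, :]     # IndexError: index 1665 is out of bounds
                                    # for axis 0 with size 100
```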
To get past the index-out-of-bounds error, I changed the line at https://github.com/mdeff/cnn_graph/blob/master/lib/models.py#L102 from
indices.extend(np.random.permutation(train_data.shape[0]))
to
indices.extend(np.random.permutation(self.batch_size))
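As far as I can tell from the traceback, the batching logic around that line looks roughly like the sketch below (simplified, not the actual models.py code). With my change every queued index stays below batch_size, which silences the IndexError but also means only the first batch_size rows are ever sampled:

```python
import collections
import numpy as np

def batches(train_data, train_labels, batch_size, num_steps):
    """Simplified sketch of the shuffled-index queue used in models.fit."""
    indices = collections.deque()
    for step in range(1, num_steps + 1):
        # Refill the queue with a fresh shuffle whenever it runs low.
        if len(indices) < batch_size:
            indices.extend(np.random.permutation(train_data.shape[0]))  # original line
            # indices.extend(np.random.permutation(batch_size))         # my change
        idx = [indices.popleft() for _ in range(batch_size)]
        yield train_data[idx, :], train_labels[idx]
```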
With this change, training runs for a while but then fails during evaluation with the error below:
(The NN architecture printout is identical to the one above, so it is omitted here.)
step 200 / 2000 (epoch 4.00 / 40):
learning_rate = 8.57e-04, loss_average = 1.54e+00
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-8-cbcc65c6f533> in <module>()
1 model = models.cgcnn(L, **params)
----> 2 accuracy, loss, t_step = model.fit(X_train, y_train, X_val, y_val)
~\gcn\cnn_graph\lib\models.py in fit(self, train_data, train_labels, val_data, val_labels)
116 print('step {} / {} (epoch {:.2f} / {}):'.format(step, num_steps, epoch, self.num_epochs))
117 print(' learning_rate = {:.2e}, loss_average = {:.2e}'.format(learning_rate, loss_average))
--> 118 string, accuracy, f1, loss = self.evaluate(val_data, val_labels, sess)
119 accuracies.append(accuracy)
120 losses.append(loss)
~\gcn\cnn_graph\lib\models.py in evaluate(self, data, labels, sess)
70 """
71 t_process, t_wall = time.process_time(), time.time()
---> 72 predictions, loss = self.predict(data, labels, sess)
73 #print(predictions)
74 ncorrects = sum(predictions == labels)
~\gcn\cnn_graph\lib\models.py in predict(self, data, labels, sess)
41 if labels is not None:
42 batch_labels = np.zeros(self.batch_size)
---> 43 batch_labels[:end-begin] = labels[begin:end]
44 feed_dict[self.ph_labels] = batch_labels
45 batch_pred, batch_loss = sess.run([self.op_prediction, self.op_loss], feed_dict)
ValueError: could not broadcast input array from shape (0) into shape (100)
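To make the second failure concrete, here is a minimal reproduction of the broadcast error from predict. The numbers are only illustrative, but they show what the message means: labels[begin:end] comes back empty because begin is already past the last label, while the target slice still expects a full batch of 100:

```python
import numpy as np

batch_size = 100
labels = np.arange(100)       # e.g. the validation labels
begin, end = 100, 200         # a batch window that starts past the last label

batch_labels = np.zeros(batch_size)
batch_labels[:end - begin] = labels[begin:end]
# ValueError: could not broadcast input array from shape (0) into shape (100)
```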