TypeError: 'NoneType' object is not iterable
SeekPoint opened this issue · 2 comments
Train on 16686 samples, validate on 1854 samples
Epoch 1/1
16686/16686 [==============================] - 1s - loss: 0.0060 - val_loss: 0.0340
Fitting epoch 2000
2016-10-28 07:51:05 -- Epoch 1999 Loss = 0.0060, Validation Loss = 0.0340 (Best: Loss = 0.0102, Epoch = 152)
Train on 16686 samples, validate on 1854 samples
Epoch 1/1
[==============================] - 1s - loss: 0.0061 - val_loss: 0.0337
2016-10-28 07:51:07 -- Epoch 2000 Loss = 0.0061, Validation Loss = 0.0337 (Best: Loss = 0.0102, Epoch = 152)
----- test1 -----
Top-1 Precision: 0.117778
MRR: 0.207952
----- test2 -----
Top-1 Precision: 0.121111
MRR: 0.212116
----- dev -----
Top-1 Precision: 0.129000
MRR: 0.216403
Traceback (most recent call last):
File "insurance_qa_eval.py", line 262, in <module>
    top1, mrr = evaluator.get_score(verbose=False)
TypeError: 'NoneType' object is not iterable
rzai@rzai00:/prj/keras-language-modeling$
I got the same error yesterday (10-30-2016)
Train on 16686 samples, validate on 1854 samples
Epoch 1/1
16686/16686 [==============================] - 24s - loss: 0.0000e+00 - val_loss: 0.0000e+00
2016-10-31 11:29:02 -- Epoch 2000 Loss = 0.0000, Validation Loss = 0.0000 (Best: Loss = 0.0000, Epoch = 1726)
----- test1 -----
Top-1 Precision: 1.000000
MRR: 1.000000
----- test2 -----
Top-1 Precision: 1.000000
MRR: 1.000000
----- dev -----
Top-1 Precision: 1.000000
MRR: 1.000000
Traceback (most recent call last):
File "insurance_qa_eval.py", line 262, in <module>
    top1, mrr = evaluator.get_score(verbose=False)
TypeError: 'NoneType' object is not iterable
I ran into the same issue, and I just found the problem: Evaluator.get_score() does not return anything, but the script tries to unpack return values from the call: top1, mrr = evaluator.get_score(verbose=False)
https://github.com/codekansas/keras-language-modeling/blob/master/insurance_qa_eval.py#L262
It's an easy fix. Will submit a pull request.
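The failure can be reproduced in isolation: a Python method that computes values but never returns them implicitly returns None, and tuple-unpacking None raises exactly this TypeError. The sketch below is a hypothetical stand-in (the metric values and class bodies are invented); only the method name and the call-site pattern mirror the script.

```python
class Evaluator:
    """Simplified stand-in for the Evaluator in insurance_qa_eval.py."""

    def get_score(self, verbose=False):
        # Stand-in metric values; the real script computes these from the model.
        top1, mrr = 0.5, 0.6
        if verbose:
            print('Top-1 Precision: %f' % top1)
            print('MRR: %f' % mrr)
        # Bug: no return statement, so the method implicitly returns None.


class FixedEvaluator(Evaluator):
    """Same method, with the one-line fix applied."""

    def get_score(self, verbose=False):
        top1, mrr = 0.5, 0.6
        if verbose:
            print('Top-1 Precision: %f' % top1)
            print('MRR: %f' % mrr)
        return top1, mrr  # Fix: hand the metrics back to the caller.


try:
    # Mirrors the failing call site: unpacking None raises TypeError.
    top1, mrr = Evaluator().get_score(verbose=False)
except TypeError as e:
    print('TypeError:', e)

# With the return added, unpacking works as the script expects.
top1, mrr = FixedEvaluator().get_score(verbose=False)
```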