XuezheMax/NeuroNLP2

AssertionError for word_dim

Sharefah-Alghamdi opened this issue · 2 comments

Hi
Thank you for sharing your code!
I'm trying to run the stack pointer parser on my own dataset and I need your help to fix this error:

===========
loading embedding: polyglot from data/polyglot-ar.pkl
2021-05-27 10:15:42,679 - Parsing - INFO - Creating Alphabets
2021-05-27 10:15:42,679 - Create Alphabets - INFO - Creating Alphabets: models/parsing/stackptr/alphabets
2021-05-27 10:15:43,070 - Create Alphabets - INFO - Total Vocabulary Size: 13914
2021-05-27 10:15:43,070 - Create Alphabets - INFO - Total Singleton Size: 6034
2021-05-27 10:15:43,073 - Create Alphabets - INFO - Total Vocabulary Size (w.o rare words): 12077
2021-05-27 10:15:43,206 - Create Alphabets - INFO - Word Alphabet Size (Singleton): 13275 (4197)
2021-05-27 10:15:43,206 - Create Alphabets - INFO - Character Alphabet Size: 75
2021-05-27 10:15:43,206 - Create Alphabets - INFO - POS Alphabet Size: 4
2021-05-27 10:15:43,206 - Create Alphabets - INFO - Type Alphabet Size: 11
2021-05-27 10:15:43,206 - Parsing - INFO - Word Alphabet Size: 13275
2021-05-27 10:15:43,207 - Parsing - INFO - Character Alphabet Size: 75
2021-05-27 10:15:43,207 - Parsing - INFO - POS Alphabet Size: 4
2021-05-27 10:15:43,207 - Parsing - INFO - Type Alphabet Size: 11
2021-05-27 10:15:43,207 - Parsing - INFO - punctuations(5): `` , : '' .
word OOV: 970
2021-05-27 10:15:43,232 - Parsing - INFO - constructing network...
Traceback (most recent call last):
  File "parsing.py", line 651, in <module>
    train(args)
  File "parsing.py", line 225, in train
    assert word_dim == hyps['word_dim']
AssertionError

Since you are using the polyglot word embeddings, you need to set `word_dim` equal to the dimension of the polyglot embeddings.
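If it helps, here is a minimal sketch for checking the dimension of your embedding file before training. It assumes the usual polyglot layout, a `(words, embeddings)` tuple pickled under Python 2; adjust the load call if your file is laid out differently:

```python
# Minimal sketch: inspect the word dimension stored in a polyglot pickle.
# Assumes the usual polyglot layout: a (words, embeddings) tuple pickled
# under Python 2, hence encoding='latin1'.
import pickle

with open('data/polyglot-ar.pkl', 'rb') as f:
    words, embeddings = pickle.load(f, encoding='latin1')

# embeddings has shape (vocab_size, word_dim); the second value is what
# "word_dim" in the hyperparameter config must match.
print(embeddings.shape)
```

Whatever dimension that prints (polyglot embeddings are typically 64-dimensional), set `word_dim` to that value in the hyperparameter config you pass to parsing.py, and the `assert word_dim == hyps['word_dim']` check will pass.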

Thank you!
That solved the problem.