Quantization error: Model file cannot be opened for loading!
saxelsen opened this issue · 2 comments
saxelsen commented
Great library! I have an issue with quantization.
When I try to run the quantization example:
from pyfasttext import FastText
model = FastText()
model.quantize(input='data/input_data.txt', output='artifacts/quantized_model',
               epoch=40, lr=0.3,
               dim=80, minn=2,
               maxn=3,
               label='__label__',
               wordNgrams=2)
The Python process exits with code 139 and the following error message:
Model file cannot be opened for loading!
------------------------------------------------------------------------
0 signals.cpython-35m-darwin.so 0x0000000110fbcbb8 sigdie + 120
1 signals.cpython-35m-darwin.so 0x0000000110fbcaef cysigs_signal_handler + 351
2 libsystem_platform.dylib 0x00007fff5c0d4f5a _sigtramp + 26
3 libc++abi.dylib 0x00007fff59efe467 GCC_except_table51 + 119
4 pyfasttext.cpython-35m-darwin.so 0x00000001100f6519 _ZL39__pyx_pf_10pyfasttext_8FastText_36trainP31__pyx_obj_10pyfasttext_FastTextP7_objectS2_ + 6905
5 pyfasttext.cpython-35m-darwin.so 0x00000001100f0c8f _ZL39__pyx_pw_10pyfasttext_8FastText_37trainP7_objectS0_S0_ + 111
6 python 0x000000010f7de8fe PyCFunction_Call + 62
7 pyfasttext.cpython-35m-darwin.so 0x00000001100e1f71 _ZL19__Pyx_PyObject_CallP7_objectS0_S0_ + 97
8 pyfasttext.cpython-35m-darwin.so 0x00000001100f1573 _ZL42__pyx_pw_10pyfasttext_8FastText_45quantizeP7_objectS0_S0_ + 227
9 python 0x000000010f7de8fe PyCFunction_Call + 62
10 python 0x000000010f85f357 PyEval_EvalFrameEx + 23159
11 python 0x000000010f8624ab _PyEval_EvalCodeWithName + 3115
12 python 0x000000010f85988c PyEval_EvalCode + 44
13 python 0x000000010f88881d PyRun_FileExFlags + 205
14 python 0x000000010f887d88 PyRun_SimpleFileExFlags + 280
15 python 0x000000010f8a00e6 Py_Main + 2982
16 python 0x000000010f78f128 main + 232
17 libdyld.dylib 0x00007fff5be53115 start + 1
18 ??? 0x0000000000000002 0x0 + 2
------------------------------------------------------------------------
Unhandled SIGSEGV: A segmentation fault occurred.
This probably occurred because a *compiled* module has a bug
in it and is not properly wrapped with sig_on(), sig_off().
Python will now terminate.
------------------------------------------------------------------------
vrasneur commented
Hmm... you can only quantize a model after you have trained it or loaded it.
So you should do:
from pyfasttext import FastText
model = FastText()
model.supervised(input='data/input_data.txt', output='artifacts/quantized_model',
                 epoch=40, lr=0.3,
                 dim=80, minn=2,
                 maxn=3,
                 label='__label__',
                 wordNgrams=2)
model.quantize(input='data/input_data.txt', output='artifacts/quantized_model')
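If you already have a trained model saved on disk, loading it first and then quantizing should also work. Here is a minimal sketch; the path 'artifacts/model.bin' is just an assumed example of a model produced by an earlier supervised run:

from pyfasttext import FastText

# Load a previously trained supervised model from disk
# ('artifacts/model.bin' is an assumed path from an earlier training run),
# then quantize it.
model = FastText()
model.load_model('artifacts/model.bin')
model.quantize(input='data/input_data.txt', output='artifacts/quantized_model')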
saxelsen commented
Ah, that seems to work, thanks!