MikhailStartsev/deep_em_classifier

TypeError: `pad_width` must be of integral type.

Closed this issue · 3 comments

Hi, I'm running into this problem.

python3 blstm_model_run.py --input features.arff --output results.arff --model data/models --feat speed direction

Traceback (most recent call last):
  File "blstm_model_run.py", line 248, in <module>
    run(parse_args())
  File "blstm_model_run.py", line 58, in run
    predictions, _ = blstm_model.evaluate_test(model=model,
  File "/Users/richardbarana/PycharmProjects/lab/deep_em_classifier/blstm_model.py", line 350, in evaluate_test
    padded_x = np.pad(x_item, (padding_size_x, (0, 0)), 'reflect')  # x is padded with reflections to limit artifacts
  File "<__array_function__ internals>", line 5, in pad
  File "/Users/richardbarana/miniconda3/envs/lab/lib/python3.8/site-packages/numpy/lib/arraypad.py", line 743, in pad
    raise TypeError('pad_width must be of integral type.')
TypeError: pad_width must be of integral type.

How can I fix it, please?

Hi! I suspect this is due to Python 3 vs. 2 differences - the code was originally written for Python 2.7.
Specifically, please try changing this line: https://github.com/MikhailStartsev/deep_em_classifier/blob/master/blstm_model_run.py#L50 by replacing / with // (integer division).
The same goes for the two divisions here (this line and the one after it):

padding_size_x = [elem + window_length / 2 for elem in padding_size_x]

I hope this helps!
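To illustrate the suggestion above: in Python 3, `/` is true division and returns a float even for two ints, and modern NumPy rejects float pad widths, which is exactly the TypeError in the traceback. A minimal sketch (the `window_length` value here is illustrative, not the repo's actual setting):

```python
import numpy as np

window_length = 5  # illustrative value only

bad_pad = window_length / 2    # Python 3: true division -> 2.5 (float)
good_pad = window_length // 2  # floor division -> 2 (int)

x = np.arange(6, dtype=float).reshape(3, 2)

# A float pad width reproduces the reported error.
try:
    np.pad(x, ((bad_pad, bad_pad), (0, 0)), 'reflect')
except TypeError as e:
    print('TypeError:', e)

# An integral pad width works as intended.
padded = np.pad(x, ((good_pad, good_pad), (0, 0)), 'reflect')
print(padded.shape)  # (7, 2)
```

Under Python 2, `/` on two ints already floored, which is why the original code worked there unchanged.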

Hi, thanks! Sorry it took me some time to test and reply. Now I'm running into another issue:

Traceback (most recent call last):
  File "blstm_model_run.py", line 248, in <module>
    run(parse_args())
  File "blstm_model_run.py", line 58, in run
    predictions, _ = blstm_model.evaluate_test(model=model,
  File "/Users/richardbarana/PycharmProjects/lab/deep_em_classifier/blstm_model.py", line 379, in evaluate_test
    batch_size = model.get_config()[0]['config']['batch_input_shape'][0]
KeyError: 0

Hm, not sure I can be of immediate help - I do not currently have access to a system where the code is working. Naively, I would suggest inspecting the structure of model.get_config() - it likely looks different in your set-up than it did in mine, perhaps because of a TF library version change. It could be that it is not a list anymore, in which case you would not need the [0], etc. In principle, this line needs to change to something that retrieves the 0th element of the batch input shape, wherever it now lives in the config.
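For what it's worth, a hedged sketch of that idea: newer Keras versions tend to return a dict with a 'layers' key from model.get_config(), while older versions returned a plain list of layer configs. The helper below (hypothetical, not part of the repo) handles both shapes; the mock configs are for illustration only and the exact nesting may differ in your TF version:

```python
def get_batch_size(config):
    """Extract the batch size (0th element of batch_input_shape) from a
    Keras model config that may be a list (older Keras) or a dict with a
    'layers' key (newer Keras/TF)."""
    if isinstance(config, dict):          # newer style: {'layers': [...], ...}
        layer_cfg = config['layers'][0]['config']
    else:                                 # older style: plain list of layers
        layer_cfg = config[0]['config']
    return layer_cfg['batch_input_shape'][0]

# Mock configs in both shapes, for illustration only:
old_style = [{'config': {'batch_input_shape': (32, 100, 2)}}]
new_style = {'layers': [{'config': {'batch_input_shape': (32, 100, 2)}}]}

print(get_batch_size(old_style))  # 32
print(get_batch_size(new_style))  # 32
```

Printing model.get_config() (or its .keys() if it is a dict) should show quickly which case applies in your environment.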

I plan to take a look at migrating the code of this repo and sp_tool to Python 3 and modern library versions in July...