han_my_functions.py --> TypeError: float() argument must be a string or a number, not 'NoneType'
vedtam opened this issue · 4 comments
Hi,
First let me thank you for the detailed and really well explained HAN example! I was looking for days for such a source to get up and running with attention visualisation in NLP.
I have prepared my data as in the description, and everything runs smoothly until I get to training with `han.fit_generator(...)`, which stops and throws:
I've noticed that it has something to do with the metrics, but I couldn't figure out what to try next.
Btw, is there a specific version of Keras and TensorFlow I should run this example with? Currently I'm on tensorflow 2.4.1 and keras 2.4.3 (both probably the latest).
Thanks!!
Hi, thank you for your interest! The code was tested with Python 3.5.5 and 3.6.1, tensorflow-gpu 1.5.0, Keras 2.2.0, and gensim 3.2.0. I suspect the version mismatch is the problem (Keras has since been integrated into TensorFlow).
I should update the code regularly, but I don't have the time. If you end up updating it, feel free to open a pull request.
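For reference, an environment matching the tested versions above can be pinned with a requirements file along these lines (a sketch; the package names are the usual PyPI ones, and the Python interpreter itself still needs to be 3.5 or 3.6):

```text
# requirements.txt -- versions the HAN code was tested with (see above)
tensorflow-gpu==1.5.0
Keras==2.2.0
gensim==3.2.0
```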
Thanks so much for the details. I've created an env with the above dependencies and now things work as expected. I've been trying to add my own data, which consists of 8 categories (instead of the default 5). My dataset contains 4000 training and ~380 test samples.
After preprocessing my data (using your preprocessing script), I can load the word vectors and train a model, but when analysing the results another error pops up: `max() arg is an empty sequence`
, with all the acc and loss plots being blank:
If I proceed with re-initialising and training a model to get a visualisation of the document embeddings, I hit an error again: `operands could not be broadcast together with shapes (8,8) (7,7)`
I've updated `n_cats=8` initially and restarted the notebook several times, but it still complains about incompatible shapes (8,8) (7,7). I'm wondering, is this because of the programmatic batch creation? Maybe some documents in a batch don't have the same size? Pff, I can't figure it out.
Did you find what the problem was? It's difficult to troubleshoot this without a reproducible example, and I am very busy these days anyway, but could it possibly be that your labels follow a zero-based index? By default, they are assumed to follow a one-based index. Change this parameter if not:
deep_learning_NLP/HAN/preprocessing.py
Line 64 in cfd3452
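To illustrate the indexing issue (a NumPy sketch, not the repo's actual code): if the preprocessing shifts assumed one-based labels down to the `0..n_cats-1` range before one-hot encoding, labels that are already zero-based end up off by one and a class effectively goes missing, which is one way to get 7-vs-8 shape mismatches like the broadcast error above.

```python
import numpy as np

def to_one_hot(labels, n_cats):
    # One-hot encode integer labels expected to lie in [0, n_cats)
    return np.eye(n_cats)[labels]

one_based = np.array([1, 2, 8])         # labels 1..8, the default assumption
zero_based = one_based - 1              # shift to 0..7 before encoding

print(to_one_hot(zero_based, 8).shape)  # -> (3, 8), matches n_cats=8

# If the labels were already zero-based, subtracting 1 would map class 0 to
# index -1 (which NumPy silently wraps to the last row), corrupting the
# targets; skipping the shift on one-based labels would instead raise an
# IndexError for label 8. Either way the class count no longer lines up.
```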
@Tixierae thanks! Figured it out and got the notebook working. I'm wondering, why is there so little information about this approach for getting the attention weights over words, and thus being able to explain an NLP deep learning model's behaviour? After days of searching I found only yours and one other source. Is it really so obvious that anyone (but me) can implement it? Or is it already outdated, along with these deep learning NLP models (there might be a better way, like transformers or something)?
Thanks!