OdysseasKr/neural-disaggregator

Getting a TypeError while calling the function "metrics.recall_precision_accuracy_f1(predicted, ground_truth)"

HRafiq opened this issue · 7 comments

@OdysseasKr @ChristoferNal @Spatzi

I am trying to execute your script ukdale-test.py. I also tried the notebook "RNN-example.ipynb". Everything runs fine until I call the function "metrics.recall_precision_accuracy_f1(predicted, ground_truth)" to check the performance of the model, at which point I get this error: "TypeError: Only valid with DatetimeIndex, TimedeltaIndex or PeriodIndex, but got an instance of 'Index'". I am unable to resolve it. Something seems to be wrong with the "predicted" data (the building-2 dish washer output that was written to HDF5 after testing the model).

Please help me out in this matter.
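For reference, this is roughly how I inspect the predicted output; the filename "disag-out.h5" is just what my script writes, so adjust the path and key to your setup:

```python
import pandas as pd

# Check whether the predicted series kept a DatetimeIndex; the TypeError
# suggests it was loaded with a plain Index instead.
with pd.HDFStore("disag-out.h5") as store:
    print(store.keys())              # list the keys stored in the file
    pred = store[store.keys()[0]]    # load the first stored object
    print(type(pred.index))          # expect a pandas DatetimeIndex here
    # If it is a plain Index, converting it might let the metrics run:
    # pred.index = pd.to_datetime(pred.index)
```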


Screenshots of the error are attached below:

ss1

ss2

Hi @HRafiq. Can you post a list of the packages and their versions? You can do that by running
pip freeze
Looking at your screenshots I assume that you are using Python3 with Anaconda, is that correct?

Yes, I'm using Python 3.6.4 with Anaconda on Windows 10. The list of packages is shown in the attached screenshots.

While trying to resolve the issue, I noticed that I was using the wrong date format when setting the training and testing windows. I then set the window for the training and testing datasets like this:

train.set_window(start="2013-05-21", end="2013-10-08")
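In context, the windows now look roughly like this (the test dates and building numbers below are just from my own setup):

```python
from nilmtk import DataSet

train = DataSet("ukdale.h5")
test = DataSet("ukdale.h5")

# Dates must be in a format pandas can parse, e.g. "YYYY-MM-DD".
train.set_window(start="2013-05-21", end="2013-10-08")
test.set_window(start="2013-10-09", end="2013-12-31")

train_elec = train.buildings[1].elec   # meters used for training
test_elec = test.buildings[2].elec     # building 2 (dish washer) used for testing
```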

When I ran the code again, the model was trained and tested successfully, but the metrics.recall_precision_accuracy_f1 call failed again. It no longer raises the TypeError mentioned earlier; this time the error is "ValueError: Arrays lengths must be same". Everything looks fine to me, so I don't know why I'm getting this error. Could it be caused by the versions of the installed packages?
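A quick check I ran to confirm the mismatch (pred_series and gt_series are the pandas Series I end up passing to the metrics; the names are mine):

```python
# Compare the lengths and time ranges of the two series; if they differ,
# restricting both to their common timestamps is one possible workaround.
print(len(pred_series), len(gt_series))

common = pred_series.index.intersection(gt_series.index)
pred_aligned = pred_series.loc[common]
gt_aligned = gt_series.loc[common]
print(len(pred_aligned), len(gt_aligned))   # should now be equal
```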

Screenshots of installed packages:

ss3

ss4

ss5

When we wrote the code for this repo, NILMTK only handled dates correctly with pandas 0.18. NILMTK recently moved to pandas 0.22, which may be what is causing this issue. It will take some time for me to test against the new NILMTK code.

If you figure out the issue, feel free to submit a pull request.
If you are in a rush to get some results, you can use my fork of NILMTK or write the metrics without NILMTK imports.
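As a rough illustration, recall/precision/accuracy/F1 can be computed with only pandas and scikit-learn; a minimal sketch, where the 10 W on/off threshold and the index alignment are example choices rather than exactly what metrics.py does:

```python
from sklearn.metrics import (recall_score, precision_score,
                             accuracy_score, f1_score)

def recall_precision_accuracy_f1(pred, ground, threshold=10):
    # pred and ground are pandas Series of power readings.
    # Align them on their common timestamps to avoid length mismatches.
    common = pred.index.intersection(ground.index)
    pred_on = (pred.loc[common] > threshold).astype(int)      # on/off states
    ground_on = (ground.loc[common] > threshold).astype(int)

    return (recall_score(ground_on, pred_on),
            precision_score(ground_on, pred_on),
            accuracy_score(ground_on, pred_on),
            f1_score(ground_on, pred_on))
```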

So I tested the neural-disaggregator with the latest versions of NILMTK and pandas. I did indeed find a bug and submitted a pull request to NILMTK. You can find more info in #4.

However, I am not really sure if this is related to your issue. Do you encounter the same issue with REDD as well?