N0ury/dmm_util

Issues with output


Below is a typical recording line read from the DMM. The values are, in order: start_ts (date/time), PRIMARY, MAXIMUM, AVERAGE, MINIMUM, DURATION, record type.
2022-02-16 23:38:01 +0100 0.903 ADC 0.9031 ADC 2418.8643 ADC 0.8815000000000001 ADC 267.90000000000003 STABLE

Issues:

  1. Lots of extra zeros, which could be removed (see the formatting sketch after this list)
  2. start_ts always shows +0100; the value returned by the DMM probably cannot be treated as a time the way line 226 does
  3. Why is AVERAGE 2418.8643 (amps!)? That is impossible.
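
On point 1, the extra zeros look like plain float repr noise. Here is a minimal sketch of one way to limit significant digits when printing (not necessarily how the script should format values):

for v in (0.903, 0.9031, 0.8815000000000001, 267.90000000000003):
    print(f"{v:.10g}")  # -> 0.903, 0.9031, 0.8815, 267.9
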
N0ury commented

start_ts is not +0100, it is 2022-02-16 23:38:01 +0100

Can you please try this script and tell me what you think about it.
It's a first try.

Thank you! Here is the sample output I get:

Recording 1 (detail) [2021-08-28 19:08:11 Central European Standard Time+0100 - 2021-08-28 23:07:34 Central European Standard Time+0100] : 17 measurements
2021-08-28 19:08:11 Central European Standard Time+0100 12.406 VDC 12.407 VDC 107398.37 VDC 12.405 VDC 865.7 INTERVAL
2021-08-28 19:23:11 Central European Standard Time+0100 12.405 VDC 12.406 VDC 107407.785 VDC 12.402 VDC 865.9 INTERVAL

Superfluous zeros are gone. I would remove the timezone info altogether; I do not think the Fluke stores it. My 289 does not even have a timezone setup option.
The average value still has a problem.
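
For example, a naive rendering without any offset could look like this (just a sketch, assuming start_ts arrives as Unix epoch seconds; the script's actual parsing may differ):

from datetime import datetime

start_ts = 1645051081  # hypothetical raw value (2022-02-16 23:38:01 on a CET machine)
# Print local time with no timezone suffix, since the meter stores none.
print(datetime.fromtimestamp(start_ts).strftime("%Y-%m-%d %H:%M:%S"))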

btw, I would also update the README and the ABOUT section. This utility is not only "for downloading saved measurements".
Adding some tags would probably increase discoverability.

N0ury commented

I have updated the master branch. You can get the latest script.

I can't reproduce the average issue. It's all OK for me.
I use a FLUKE 287. Could that be the reason?
Does this happen with the recordings option?
Is the data correct when you display the trend on the meter?

N0ury commented

I'm not sure I will succeed, but please try this debug version
and redirect the output to a file.
I'll try to see what's happening.

The new script runs many times slower, and sometimes it fails after several minutes. I have tried more than once, and it does not seem to fail at the same record every time. It has now been running for 30 minutes and I need to leave for the weekend. Results attached.

Traceback (most recent call last):
  File "Fluke289-C.py", line 524, in <module>
    switch[sys.argv[2]]()
  File "Fluke289-C.py", line 409, in do_recordings
    measurement = qsrr(str(recording['reading_index']), str(k))
  File "Fluke289-C.py", line 114, in qsrr
    raise ValueError('By app: Invalid block size: %d should be 146' % (len(res)))
ValueError: By app: Invalid block size: 55 should be 146

Fluke-C1.txt

N0ury commented

@tdjastrzebski can you please get the wrong_average branch (or just the script) and try the new version?
It should solve the average issue.
The timezone has also been removed.
Please let me know if it's OK now.

It looks very promising although:

  1. DURATION is now missing its decimal point (e.g. 8657 instead of 865.7)
  2. the script fails at recording 5, record 6:
Traceback (most recent call last):
  File "Fluke289-D.py", line 528, in <module>
    switch[sys.argv[2]]()
  File "Fluke289-D.py", line 416, in do_recordings
    / measurement['duration'],
ZeroDivisionError: float division by zero

N0ury commented

Thanks for testing.
The 289 returned very strange data: the duration is zero, and the min and max are very high!
This should never happen. I have added a check before dividing (see the sketch below).

About the duration:
I think it was wrong; it was incorrectly divided by 10.
In fact, the average value returned by the DMM is the sum of the readings over a certain period,
and that period is made up of 'duration' samples, so the mean is the returned sum divided by the sample count.
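
A minimal sketch of that computation ('duration' appears in the traceback above; 'value_sum' is a made-up field name, not necessarily the script's):

def average(measurement):
    # The meter reports the *sum* of the readings; 'duration' is the
    # number of samples summed, so the mean is sum / sample count.
    if measurement['duration'] == 0:
        return None  # guard for the ZeroDivisionError seen above
    return measurement['value_sum'] / measurement['duration']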

Can you please try the new script in the wrong_average branch, and tell me what you think.

I would say returning duration as the number of elapsed seconds, rather than a sample count, would make more sense.
Recording 5 started at 23:33:02 and ended at 19:50:11 the next day (5 min interval), so DURATION interpreted as seconds should add up to ~20 h, which it does. But it is also possible that the 289 reads at ~10 Hz, in which case sample count / 10 is equivalent to seconds (see the sketch below).
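
A tiny sketch of that equivalence (the ~10 Hz rate is an assumption here):

SAMPLE_RATE_HZ = 10  # assumed sampling rate of the 289

def duration_seconds(sample_count):
    # 8657 samples at ~10 Hz -> 865.7 elapsed seconds
    return sample_count / SAMPLE_RATE_HZ
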
I suspect that 9.99999999e+37 values indicate overload.
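
If so, those sentinel values could be filtered out before use (a sketch based on that suspicion, not confirmed meter behaviour):

OVERLOAD = 9.99999999e+37  # suspected overload (OL) sentinel

def is_overload(value):
    # Treat anything at or near the sentinel as OL, not a real reading.
    return value >= OVERLOAD * 0.99
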
I often get random 'Invalid block size: xxx should be 146' errors. I tried slowing the COM port down, but it seems the Fluke 289 supports 115200 bps only. What does seem to help is increasing the timeout to 100 ms in line 522: the 'invalid block size' errors still happen, but noticeably less frequently. Would it be possible to add a retry loop, provided the download can be restarted at a given sample (see the sketch below)? Recordings attached.
Fluke-E4.txt
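
A sketch of such a retry loop, wrapping the script's qsrr() seen in the tracebacks above (the wrapper name and parameters are made up, and qsrr is assumed to be in scope):

import time

def qsrr_with_retry(reading_index, k, retries=3, delay=0.1):
    # Re-request the same sample when a short/garbled block comes back,
    # instead of aborting the whole download.
    for attempt in range(retries):
        try:
            return qsrr(reading_index, k)
        except ValueError:
            if attempt == retries - 1:
                raise
            time.sleep(delay)  # let the serial line settle before retrying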

N0ury commented

The 287 and 289 read at nearly 10 Hz, so there are ~10 samples per second.

The most annoying part is the random error.
I have never used this many samples over such a long time.
I'm going to create a big recording to check this.
Can you please open a new issue for this point?