bycycle-tools/bycycle

ValueError: cannot convert float NaN to integer

rajatsaxena opened this issue · 6 comments

I am trying to use the compute_features function with the following arguments:

burst_kwargs = {'amplitude_fraction_threshold': .2,
                'amplitude_consistency_threshold': .5,
                'period_consistency_threshold': .5,
                'monotonicity_threshold': .8,
                'N_cycles_min': 3}

narrowband_kwargs = {'N_seconds': .5}

but I keep running into the following error. Any suggestions on how to fix it?

Traceback (most recent call last):

  File "<ipython-input-61-da442dc80175>", line 6, in <module>
    hilbert_increase_N=True)

  File "C:\ProgramData\Anaconda3\lib\site-packages\bycycle\features.py", line 128, in compute_features
    zeroxR, zeroxD = find_zerox(x, Ps, Ts)

  File "C:\ProgramData\Anaconda3\lib\site-packages\bycycle\cyclepoints.py", line 202, in find_zerox
    zeroxD[i] = Ps[i] + int(np.median(_fzerofall(x_temp)))

ValueError: cannot convert float NaN to integer

Can you send us a snippet of the data that you're analyzing when you run into this error?

@nschawor have you seen this before?

Hey @rajatsaxena! Yeah, it's hard to tell from the traceback alone. Like Brad said, if you attach a stand-alone script that reproduces the error, that'll help us figure it out. If it includes code to simulate data that reproduces the issue, even better. Otherwise, a link to some sort of text file would work too.

Looking at the error, it might be an issue with how bycycle is filtering the data and introducing NaNs.
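
A quick way to check that on your end (plain numpy, nothing bycycle-specific; the file name below is just a placeholder for however you load your data):

import numpy as np

signal = np.load('your_data.npy')  # placeholder path
# NaNs in the raw or filtered signal would explain int(NaN) failing downstream
print(np.isnan(signal).sum(), 'NaNs out of', signal.size, 'samples')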

Here is the link to the data: https://drive.google.com/file/d/1Fdky_-ARXetMIVa59jA6KqpaWZ-0g_nq/view?usp=sharing along with the code snippet to recreate the error.

import numpy as np
from bycycle.features import compute_features

# Load the raw signal; Fs is the sampling rate (Hz), f_gamma the gamma band
signal = np.load('ob.npy')
Fs = 3000
f_gamma = (40, 100)

# Thresholds for cycle-by-cycle burst detection
burst_kwargs = {'amplitude_fraction_threshold': .2,
                'amplitude_consistency_threshold': .5,
                'period_consistency_threshold': .5,
                'monotonicity_threshold': .8,
                'N_cycles_min': 3}
narrowband_kwargs = {'N_seconds': .5}

# Trough-centered cycle features with cycle-based burst detection
df = compute_features(signal, Fs, f_gamma,
                      center_extrema='T',
                      burst_detection_method='cycles',
                      burst_detection_kwargs=burst_kwargs,
                      find_extrema_kwargs={'filter_kwargs': narrowband_kwargs})

Sorry for the super delayed response.

Yeah, this is a really weird (and rare) spot to break. It comes down to the _fzerorise() function, which is supposed to find the point between a trough and the next peak at which the signal crosses 0.

def _fzerorise(data):
    """Find zero-crossings on the rising edge of a filtered signal."""
    pos = data < 0
    # True at index i where data[i] < 0 and data[i+1] >= 0
    return (pos[:-1] & ~pos[1:]).nonzero()[0]

For example, for the input [-1352. -1340. -1176. -908. -641. -475. -385. -288. -120. 142. 467. 759. 941. 1045. 1130. 1237. 1336. 1352.], we get the output [8], meaning that the signal crosses 0 just after index 8. It returns an array rather than a single index because in noisy signals, where the oscillation is not apparent, there can be multiple zero-crossings.
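
Running the function above on that input in a session confirms it (the array is copied from the example; only numpy is needed):

import numpy as np

x = np.array([-1352., -1340., -1176., -908., -641., -475., -385., -288., -120.,
              142., 467., 759., 941., 1045., 1130., 1237., 1336., 1352.])
print(_fzerorise(x))  # [8]: the last index before the signal turns non-negative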

However, for the cycle it's breaking on, the "peak" and "trough" have exactly the same voltage (i.e., there really isn't a gamma oscillation at this time). So the input to _fzerorise() is [ 0. 51. 113. 126. 85. 31. 0.] and the output is an empty array.
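
Spelled out with the function above, that's exactly the chain that ends in the traceback:

import numpy as np

x_flat = np.array([0., 51., 113., 126., 85., 31., 0.])
zerox = _fzerorise(x_flat)  # array([], dtype=int64): no rising zero-crossing
med = np.median(zerox)      # nan (median of an empty array, with a RuntimeWarning)
int(med)                    # ValueError: cannot convert float NaN to integer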

Because this cycle feature should be coupled with oscillation detection, this data point should ultimately be thrown out anyway (if there were a gamma oscillation, the peak and trough would not have the same voltage). Therefore, I think it's reasonable to just return a dummy output when this happens.
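
Something along these lines, for instance (a sketch of the idea only; the dummy midpoint is one possible choice, and the actual change lives in the PR):

import numpy as np

def _fzerorise(data):
    """Find zero-crossings on the rising edge of a filtered signal."""
    pos = data < 0
    zerox = (pos[:-1] & ~pos[1:]).nonzero()[0]
    if len(zerox) == 0:
        # Flat, non-oscillatory cycle with no zero-crossing: return a dummy
        # midpoint so int(np.median(...)) downstream never sees NaN.
        # Burst detection should discard this cycle anyway.
        zerox = np.array([len(data) // 2])
    return zerox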

Can you clone the repo and check out the branch "no_oscillation_error" in PR #40 to make sure this works for you now before we merge it? I tested it in a notebook with your data, and it no longer errored for me.

I hope this response wasn't too late.

Hi,
I was running into the same error (running version 0.1.2). I added the fix you suggested and now it works. Just so you know, and thanks!

Addressed in #40.