BlueBrain/eFEL

Issue with AP amplitudes and peaks

appukuttan-shailesh opened this issue · 6 comments

I am trying to evaluate features such as AP1_amp, AP2_amp, AP1_peak and AP2_peak, and I seem to be getting incorrect (unexpected) values. I have attached a sample data file.
data_Brian1_strong_depol_70.0.xlsx

I tried taking a closer look at the AP1_peak evaluation and found that eFEL gives me the first peak as 9.26 mV, while the actual value is ~20 mV. This is a screenshot of the trace and where eFEL detects the first spike (green line):
[image: trace with the eFEL-detected first spike marked by the green line]

I tried playing around with parameters such as:

efel.setDoubleSetting('Threshold', 10)             # spike detection voltage threshold (mV)
efel.setDoubleSetting('DerivativeThreshold', 5.0)   # dV/dt threshold for spike onset (mV/ms)

in various combinations... but I have had no luck. When I change the threshold to 10, I lose all spikes. Other combinations simply don't seem to produce much change.

Any tips on what I could do to better capture the peaks?

Just to add...
I have another data file that is very similar:
data_Brian2_strong_depol_70.0.xlsx

for which eFEL correctly identifies the peak:
[image: second trace, with the peak correctly detected]

I'm not totally sure why the default settings work for this second file but not for the first. More importantly, I am unable to figure out what changes to the settings would make the first file work correctly.

Hello @appukuttan-shailesh,

Thank you for reporting this issue.

In the trace you shared, the drop after the spike is abrupt (less than 0.01 mV). However, eFEL re-interpolates the trace (both current and voltage) before computing the features. By default, the interpolation step is 0.1 mV, which is much larger than the drop we see here. Unfortunately, for this spike, the interpolated points do not land on top of the spike (see 1.png).
[1.png: interpolated points missing the spike apex]

If we change the interpolation setting using the command:

efel.setDoubleSetting('interp_step', 0.01)

we get the expected result:
[2.png: with the finer interpolation, the spike apex is captured]
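
For completeness, here is a minimal sketch of how that setting fits into a full feature-extraction call. The file name, column layout and stimulus window below are assumptions for illustration, not taken from the attached data:

```python
import numpy as np
import efel

# Finer interpolation so the narrow spike apex is not stepped over
efel.setDoubleSetting('interp_step', 0.01)

# Hypothetical loading step: time (ms) and voltage (mV) columns exported
# from the spreadsheet to a plain two-column CSV file.
time, voltage = np.loadtxt('data_Brian1_strong_depol_70.0.csv',
                           delimiter=',', unpack=True)

trace = {
    'T': time,              # time in ms
    'V': voltage,           # membrane voltage in mV
    'stim_start': [100.0],  # assumed stimulus onset (ms)
    'stim_end': [170.0],    # assumed stimulus offset (ms)
}

features = efel.getFeatureValues(
    [trace], ['AP1_amp', 'AP2_amp', 'AP1_peak', 'AP2_peak'])
print(features[0]['AP1_peak'])  # should now report the ~20 mV peak
```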

Thanks a lot @DrTaDa for both explaining and resolving this issue (and also responding so quickly).

I have some minor queries regarding the solution... is interp_step in 'mV' or in 'ms'? I presume that was a typo:

(less than 0.01 mV)....... By default, the interpolation step is 0.1 mV,

(or maybe I understood this wrong?)

Maybe a naive question... but is there a reason why eFEL re-interpolates the data rather than using the given data itself? Is it specifically to handle adaptive integration with non-constant time steps?

For highest accuracy (to make it foolproof; no thoughts on efficiency), would the solution be to set interp_step to the simulation time step (e.g. dt = 0.02 ms)?

Hello again @appukuttan-shailesh,

Indeed, it is in ms, sorry for the typo.

I might be wrong as I did not write eFEL, but I think it is to get consistent and reproducible results across different sets of recordings.
For example, if two recordings of the exact same signal, taken at different sampling rates, were sent to eFEL and eFEL did not re-interpolate, you would not get the same feature values even though the underlying signal is the same.
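
As an illustration of that point (this is only a sketch of the idea, not eFEL's actual internals), resampling two differently sampled copies of the same signal onto one fixed grid makes the downstream computation see essentially identical inputs:

```python
import numpy as np

def resample(t, v, step=0.1):
    """Linearly re-interpolate a trace onto a fixed time grid (step in ms)."""
    t_new = np.arange(t[0], t[-1], step)
    return t_new, np.interp(t_new, t, v)

# The same underlying signal recorded at two different sampling rates
t_fine = np.arange(0.0, 100.0, 0.01)
t_coarse = np.arange(0.0, 100.0, 0.05)
v_fine = np.sin(0.2 * t_fine)
v_coarse = np.sin(0.2 * t_coarse)

# After resampling, both versions feed (nearly) identical samples to any
# feature computation, so the extracted values agree.
_, r_fine = resample(t_fine, v_fine)
_, r_coarse = resample(t_coarse, v_coarse)
print(np.allclose(r_fine, r_coarse, atol=1e-4))  # True
```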

In general, I think interp_step should be chosen such that the features relevant to the problem at hand are computed properly. I know that sounds like a fuzzy rule, but a very low interp_step could also cause issues with the computation of some features.

Thanks @DrTaDa... that makes sense. But if the supplied data has finer resolution than the default interp_step, I would have hoped/expected the finer resolution to be used. Also, I'm curious in which cases a low interp_step could prove to be a problem.

@wvangeit, @anilbey: could you provide a comment on this?

Maybe a naive question... but is there a reason why eFEL re-interpolates the data rather than using the given data itself? Is it specifically to handle adaptive integration with non-constant time steps?

For highest accuracy (to make it foolproof; no thoughts on efficiency), would the solution be to set interp_step to the simulation time step (e.g. dt = 0.02 ms)?

Also, I'm curious in which cases a low interp_step could prove to be a problem.

Some features rely explicitly on a certain number of data points. I suspect that if the discretisation becomes too fine, some of those features could fail; we would need to look at it on a per-feature basis to be sure.
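
One pragmatic way to check this for a given analysis is simply to sweep interp_step and compare the features of interest; here is a rough sketch (the trace below is a placeholder, and the stimulus window is an assumption):

```python
import numpy as np
import efel

def extract(trace, step, features=('AP1_peak', 'AP2_peak', 'AP1_amp')):
    """Re-run eFEL with a given interpolation step and return the feature dict."""
    efel.reset()  # clear previously applied settings
    efel.setDoubleSetting('interp_step', step)
    return efel.getFeatureValues([trace], list(features))[0]

# Placeholder trace: substitute the real time/voltage arrays from the recording.
time = np.arange(0.0, 300.0, 0.02)   # ms, dt = 0.02 ms as in the simulation
voltage = np.full_like(time, -70.0)  # mV (dummy values, no spikes here)
trace = {'T': time, 'V': voltage, 'stim_start': [100.0], 'stim_end': [200.0]}

# If the values stay stable as the step shrinks, the chosen interp_step is
# fine enough for these features.
for step in (0.1, 0.02, 0.01):
    print(step, extract(trace, step))
```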