Crown sampling interval is 4.0 when should be 3.90625 (sampling rate = 256 Hz) - eeg-notebooks or brainflow issue?
sjburwell opened this issue · 4 comments
ℹ Computer information
- Platform OS (e.g Windows, Mac, Linux etc): Mac
- Python Version: 3.7
- Brain Interface Used (e.g Muse, OpenBCI, Notion etc): Crown, Notion2
📝 Provide detailed reproduction steps (if any)
Not sure if I should open this issue here or in the brainflow documentation, but:
- Running the Neurosity Crown through eeg-notebooks
- Inspecting the first difference of the `timestamps` column in the output CSV to obtain the sampling interval(s)
- There is variability in the sampling interval (to be expected with OSC?), but more importantly, the median sampling interval is 4.0 ms (what was true for the Notion2 at its 250 Hz sampling rate) instead of 3.90625 ms (what is expected for the Crown at 256 Hz).
✔️ Expected result
Sampling interval for Neurosity Crown should be 3.90625 ms.
❌ Actual result
Sampling interval for the Neurosity Crown is 4.0 ms, which is the expected sampling interval for the Neurosity Notion2.
I believe the issue is upstream from brainflow, i.e., at the level of the Neurosity SDK. I have written to the Neurosity devs.
Quick (hopefully) question: how should the "timestamps" column in the recorded *.csv files be interpreted? Are these the actual times / latencies of the recorded samples? If there is a substantial gap between consecutive timestamps (e.g., 100 milliseconds when the sampling interval should be a few milliseconds), does this indicate a gap in the recording (i.e., a "boundary / discontinuity" event)? Or can the values in the timestamps column be ignored, trusting instead that each consecutive row of data in the *.csv file was sampled at the correct time?
I ask because I've observed considerable variability in the intervals for some boards (e.g., cyton_daisy, crown) and less variability for others (e.g., muse2016). Do all of them use LSL? OSC? And would this matter?
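For what it's worth, a gap of the kind described can be flagged programmatically. The sketch below is my own illustration (function name, threshold, and example values are all made up), assuming timestamps in milliseconds:

```python
# Hypothetical gap check: flag inter-sample intervals that greatly exceed the
# nominal interval for the board's sampling rate, which would indicate a
# possible recording discontinuity.
import numpy as np

def find_gaps(timestamps_ms, fs_hz, tolerance=1.5):
    """Return indices i where the interval between sample i and i+1
    exceeds tolerance * (1000 / fs_hz) milliseconds."""
    nominal_ms = 1000.0 / fs_hz
    diffs = np.diff(np.asarray(timestamps_ms, dtype=float))
    return np.where(diffs > tolerance * nominal_ms)[0]

# Example: a 256 Hz stream with a ~100 ms dropout after the 4th sample.
ts = [0.0, 3.90625, 7.8125, 11.71875, 111.71875, 115.625]
print(find_gaps(ts, 256))  # -> [3]
```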
Hi @sjburwell sorry for getting to this late again, I need to turn on my email notifications for github.
It has been a minute since I wrote the part that handles timestamps, but if I remember correctly, I generally trusted that each consecutive row of data was sampled at the correct time. The Neurosity devices use OSC on the BrainFlow back end, but I don't think the same is true of the OpenBCI devices. The reason I trusted the samples were coming in evenly spaced is that ERP/SSVEP analyses worked on the data without any adjustment of the timestamps. For any device other than the Muse, the timestamps come directly from BrainFlow, so if they ARE coming in at weird, unevenly spaced intervals rather than evenly spaced as I assumed, we could implement a hot-fix here, but I think an issue would also need to be posted to the BrainFlow repo so it can be fixed on the back end.
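To make concrete what "trusting each consecutive row" implies: it amounts to regenerating nominal timestamps from the first timestamp plus the sample index, ignoring the recorded per-sample values. A minimal sketch (names are illustrative, not eeg-notebooks internals):

```python
# Regenerate evenly spaced nominal timestamps, assuming no dropped samples.
import numpy as np

def nominal_timestamps(first_ts_ms, n_samples, fs_hz):
    """Timestamps implied by assuming perfectly regular sampling at fs_hz."""
    return first_ts_ms + np.arange(n_samples) * (1000.0 / fs_hz)

print(nominal_timestamps(0.0, 4, 256))  # -> [0. 3.90625 7.8125 11.71875]
```

If the stream ever drops packets, these nominal timestamps silently drift relative to the true sample times, which is exactly why a gap in the recorded timestamps matters.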
Thanks @JadinTredup, no problem about the delay.
I've seen varying degrees of this uneven sampling interval issue with different headsets and different recording sessions. In most cases, and with boards that have been around a while (e.g., muse2016, cyton_daisy), the timestamps are fairly evenly spaced regardless of laptop/wifi network, and I believe it can fairly often be trusted that the observed sampling interval approximates the intended sampling interval (i.e., 1000 ms / sampling rate in Hz). For the Crown, however, I have found that the observed sampling interval, and occasional "big gaps" (e.g., 200 ms of lost data), may be affected by other processes on the laptop or by the wireless network through which the board and laptop are connected. I.e., on one laptop with non-OS processes shut down, on its own wifi network, the issue is negligible; on another laptop with multiple applications running and multiple devices on the wifi network, the issue is bad.
I don't have a thorough understanding of how BrainFlow interfaces with Neurosity's OSC stream, but from a bird's-eye view it seems like multiple processes might be using the same port, or multiple packets of data might be colliding at the CPU. (The Muse and OpenBCI rigs might be less affected because they rely [I believe] on Bluetooth and dedicated USB dongles.) I've corresponded with Alex about this, and he said he's looking into it, but it might be worth my testing experiments through their (preferred) JavaScript API.
For now, I've written some kludge code to check for and remove problematic gaps / sampling errors in my own data, but others might want to check the timestamps in their data if they do not see ERPs/SSVEPs as expected.
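One possible shape for such kludge code (not the author's actual script; all names and thresholds here are invented for illustration): split the recording at timestamp gaps and keep only the longest gap-free run.

```python
# Split a recording at large timestamp gaps and keep the longest clean segment.
import numpy as np

def longest_clean_segment(timestamps_ms, data, fs_hz, tolerance=1.5):
    """Split wherever the inter-sample interval exceeds tolerance * nominal
    interval, then return (timestamps, data) for the longest clean segment."""
    nominal_ms = 1000.0 / fs_hz
    diffs = np.diff(np.asarray(timestamps_ms, dtype=float))
    gap_starts = np.where(diffs > tolerance * nominal_ms)[0] + 1
    segments = np.split(np.arange(len(timestamps_ms)), gap_starts)
    best = max(segments, key=len)
    return np.asarray(timestamps_ms)[best], np.asarray(data)[best]

# Example: a 256 Hz stream with a ~200 ms dropout after the 3rd sample.
ts = [0.0, 3.90625, 7.8125, 207.8125, 211.71875, 215.625, 219.53125]
vals = [1, 2, 3, 4, 5, 6, 7]
clean_ts, clean_vals = longest_clean_segment(ts, vals, 256)
print(clean_vals)  # -> [4 5 6 7]
```

Dropping gapped segments is a blunt instrument; for ERP work, simply excluding epochs that overlap a gap may be preferable to discarding whole segments.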