openbci-archive/OpenBCI_NodeJS

Simulator does not simulate packet fragmentation

Closed this issue · 6 comments

Some functionality works on the simulator, but not always on the real board (#99). Sometimes this is due to packet fragmentation, which the simulator does not simulate.

The simulator emits data inside the write() function, which produces a different call order from reality (the board will respond in a future tick, after write() has returned, and with fragmentation).
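Roughly, the difference is between emitting synchronously inside write() and deferring the emit to a later tick. A minimal sketch of the deferred approach (the class and method names here are made up, not the module's actual code):

```js
const EventEmitter = require('events').EventEmitter;

// Hypothetical stand-in for the simulated serial port, not the module's
// real simulator class.
class FakeSerialPort extends EventEmitter {
  write (data, callback) {
    // Synchronous version (what the simulator does today): 'data' listeners
    // run before write() has even returned to the caller.
    // this.emit('data', this._respondTo(data));

    // Deferred version: the reply arrives on a future tick, after write()
    // and its callback have completed, as it does with the real board.
    setImmediate(() => this.emit('data', this._respondTo(data)));
    if (callback) callback(null);
  }

  _respondTo (data) {
    return Buffer.from('$$$'); // placeholder reply
  }
}
```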

I've attempted to implement fragmentation simulation at https://github.com/baffo32/OpenBCI_NodeJS/tree/1.4.0-simulate-fragmentation , but I am unsure of the change because now so many tests are failing. EDIT: working great, see #101

  • implement simulation
  • verify with all tests
  • deal with upper/lower case of options, see commit comment
  • I hacked up the tests to work again but accidentally lost that work in a botched rebase; recover the lost work and publish it
  • update changelog.md to include this and also #98
  • address the concept of a full buffer: the chip's buffer is supposed to fill up and then empty as a full chunk, while the current approach of emptying only what is needed is meant to emulate a noisy environment; reconcile the two approaches?
  • add config options for latency/speed and buffer size
  • add tests covering the new code (and verifying its functionality, of course)

This should also address #63

I think #63 might need a more advanced implementation. After reading up on things a little, I see two possible behaviors (a rough sketch of the first follows the list):

  • chip-like: data comes when the buffer (62 bytes) is full, when the timer (default 16 ms, configurable from 1 ms to 255 ms) expires, or when the chip is triggered
  • mac/win driver-like: data comes when the buffer (default 4096 bytes, hackable down to 64 bytes) is full, and perhaps also when a timer expires or on a trigger
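A rough sketch of the chip-like case, under the assumption I have the behavior right (all names here are made up): bytes accumulate in a small buffer that is flushed when it reaches 62 bytes or when a latency timer (default 16 ms) fires.

```js
// Hypothetical sketch of the chip-like buffering described above.
class FragmentingBuffer {
  constructor (emitChunk, bufferSize = 62, latencyMs = 16) {
    this.emitChunk = emitChunk;   // called with each flushed chunk
    this.bufferSize = bufferSize; // chip buffer size in bytes
    this.latencyMs = latencyMs;   // latency timer, default 16 ms
    this.pending = Buffer.alloc(0);
    this.timer = null;
  }

  push (bytes) {
    this.pending = Buffer.concat([this.pending, bytes]);
    // Flush full chunks immediately, as the chip would when its buffer fills.
    while (this.pending.length >= this.bufferSize) {
      this._flush(this.bufferSize);
    }
    // Anything left over goes out when the latency timer expires.
    if (this.pending.length > 0 && !this.timer) {
      this.timer = setTimeout(() => this._flush(this.pending.length), this.latencyMs);
    }
  }

  _flush (count) {
    if (this.timer) {
      clearTimeout(this.timer);
      this.timer = null;
    }
    this.emitChunk(this.pending.slice(0, count));
    this.pending = this.pending.slice(count);
  }
}
```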

I'm trying here to simulate the packet fragmentation I saw on my live OpenBCI, which I assumed came from my noisy environment because it differed on every run, resulting in random chopping of the data.

A better implementation might allow configuring the timer and the buffer size, and also throw in random delays so that the 'cutting from noise' effect occurs as well. I'm not sure I understand the actual behavior correctly, but I could implement what I've described.
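Concretely, the 'cutting from noise' part could look something like this (just a sketch; the function and parameter names are made up):

```js
// Emit queued bytes in randomly sized fragments with small random delays,
// to emulate the noisy chopping seen on the live board.
function emitWithRandomFragmentation (emitter, data, maxFragment = 62, maxDelayMs = 16) {
  let offset = 0;
  const emitNext = () => {
    if (offset >= data.length) return;
    const size = 1 + Math.floor(Math.random() * maxFragment);
    emitter.emit('data', data.slice(offset, offset + size));
    offset += size;
    setTimeout(emitNext, Math.random() * maxDelayMs);
  };
  emitNext();
}
```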

For testing, I had thought the 'OneByOne' mode would simply cover every possible place where fragmentation could occur. But it doesn't cover the opposite possibility of many packets being squished together, up to 4096 bytes long; I'm not sure how to test both of these without running the test suite twice ...

Maybe the right approach is to add new fragmentation modes such as 'chipDefault', 'driverDefault', and 'driverReconfigured', and let 'random' and 'oneByOne' emulate noise separately.
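Something like the following is what I'm picturing for the options; the names below are placeholders, not the module's actual API:

```js
// Placeholder option names, just to illustrate separating fragmentation
// modes from noise emulation.
const simulatorOptions = {
  // 'none' | 'chipDefault' | 'driverDefault' | 'driverReconfigured'
  // | 'random' | 'oneByOne'
  fragmentation: 'chipDefault',
  bufferSize: 62,    // bytes held before a flush (4096 for 'driverDefault')
  latencyTime: 16    // ms before a partial buffer is flushed
};
```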

I hacked up the tests to work again but accidentally lost that work in a botched rebase; recover the lost work and publish it

This sounds so sad.. sorry.

Yeah, I have seen all different sizes of data come in; it drove me up a wall when I was first trying to write this module. That's why I wrote the process bytes method over and over again, to get it robust enough to work with any size of data event from the serial port. I'm not sure of the right way to emulate that nuance. Are you seeing problems with the _processBytes function?

So I would think the goal of these changes is to have sample data emitted at random intervals and in different amounts, right?

I have been running into issues with _processBytes, and it's the reason I began this change.

The OpenBCI protocol is not conducive to buffered serial communication: text replies have no marker that initiates them, and some packets are undocumented, so there is no way yet to know for sure which packet you have simply by looking at the input.

Usually when coding a protocol like this, every packet has some identifier at the start that tells you how long it will be. You then wait until you have buffered enough data for the length of the packet, send only that much data off to the handler, and use the remaining data to begin buffering the next packet. This is some work to implement, but it is very reliable.
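For contrast, a generic length-prefixed parser looks roughly like this (this is not the OpenBCI format, which has no such header; it's just to illustrate the usual pattern):

```js
// Generic length-prefixed framing: 1 type byte, 1 length byte, then
// `length` payload bytes. Complete packets go to the handler; leftover
// bytes start buffering the next packet.
let pending = Buffer.alloc(0);

function onSerialData (chunk, handlePacket) {
  pending = Buffer.concat([pending, chunk]);
  while (pending.length >= 2) {
    const packetLength = 2 + pending[1];
    if (pending.length < packetLength) break; // wait for more bytes
    handlePacket(pending.slice(0, packetLength));
    pending = pending.slice(packetLength);
  }
}
```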

The OpenBCI needs a fancier implementation because of the different possible forms of incoming data. The current implementation handles this by trying to keep track of what 'state' the remote device is in, but it isn't yet fully robust to arbitrarily split packets. See my question in #ultracortex, and commits 884f2f1 and 0f7bb33.

So I would think the goal of these changes is to have sample data emitted at random intervals and in different amounts, right?

Yes. The goal is to have the tests catch what happens in reality, which is that data may be squished together or split apart, and the data handler will receive it in that form. Basically this whole issue exists to create a test case for the fix in 884f2f1, which was the first problem I discovered while investigating the real-board test failures.
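Roughly the kind of test I have in mind (the helpers here are hypothetical, not the module's real test utilities):

```js
const assert = require('assert');

// Chop a buffer of known-good packets at random offsets, feed the pieces
// in as separate 'data' events, and assert that nothing is lost.
it('parses every sample regardless of how the stream is fragmented', () => {
  const whole = makeSampleBuffer(100); // hypothetical: 100 valid packets
  const parser = makeParser();         // hypothetical parser under test
  let offset = 0;
  while (offset < whole.length) {
    const size = 1 + Math.floor(Math.random() * 80);
    parser.onData(whole.slice(offset, offset + size));
    offset += size;
  }
  assert.equal(parser.sampleCount, 100);
});
```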

fixed in 1.4.0