urish/node-lsl

[Discussion] How should asynchronous stream reading work?

Closed this issue · 5 comments

Hey there,

I've just implemented an early version of a StreamInlet class that allows data to be read from an LSL stream. In order to make the API "JavaScript-y" I would like to make this an EventEmitter.

However, I'm not sure what the best way to handle the asynchronous behavior of that EventEmitter is, considering that there aren't any async functions built into LSL.

In my current implementation, I've used a simple setInterval to make the class wait 1 second between attempts to read chunks off the stream:

streamChunks(interval, timeout = 0.0, maxSamples = 1024, errCode = 0) {
        const sampleBuffer = new FloatArray(maxSamples * this.channelCount);
        const timestampBuffer = new DoubleArray(maxSamples);
        const read = () => {
            process.nextTick(() => {
                if (this.isStreaming) {
                    lsl.lsl_pull_chunk_f(
                        this.inlet,
                        sampleBuffer,
                        timestampBuffer,
                        sampleBuffer.length,
                        timestampBuffer.length,
                        timeout,
                        errCode,
                    );
                    this.emit('chunk', {
                        // Group the flat, interleaved sample buffer into
                        // one array of channel values per sample
                        samples: sampleBuffer.toJSON().reduce((acc, curr, i) => {
                            if (i % this.channelCount === 0) {
                                acc.push([]);
                            }
                            acc[acc.length - 1].push(curr);
                            return acc;
                        }, []),
                        timestamps: timestampBuffer.toJSON(),
                    });
                }
            });
        };
        try {
            this.isStreaming = true;
            this.interval = setInterval(read, interval);
        } catch (e) {
            console.error(e);
            this.close();
        }
    }
urish commented

This approach will introduce 1 second of latency into the stream, which might not be acceptable for some applications.

One approach would be to make the chunk size / interval configurable, or to derive it from the sample rate and chunk size (e.g. if the sample rate is 500 Hz and the user wants a chunk size of 10, it makes sense to fire the timer every 20 ms or so).
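The derivation above is just chunkSize / sampleRate converted to milliseconds; a minimal sketch as a helper (the name deriveInterval is hypothetical, not part of node-lsl):

```javascript
// Derive a polling interval (ms) from the stream's sample rate (Hz)
// and the desired chunk size, as suggested above.
// Hypothetical helper; the name is not part of node-lsl.
function deriveInterval(sampleRate, chunkSize) {
  return Math.round((chunkSize / sampleRate) * 1000);
}

console.log(deriveInterval(500, 10)); // 20 ms for a 500 Hz stream
console.log(deriveInterval(256, 12)); // 47 ms for a 256 Hz Muse stream
```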

Another approach would be to use ES2015 iterators (or ES2018 async iterators), allowing the user to specify when to pull the next chunk.
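A minimal sketch of the async-iterator idea, assuming a pullChunk callback that wraps lsl_pull_chunk_f and returns { samples, timestamps } (both names are hypothetical; this is not the node-lsl API):

```javascript
// Sketch only: `pullChunk` stands in for a synchronous wrapper around
// lsl_pull_chunk_f. The consumer, not a timer, decides when to pull
// the next chunk by advancing the iterator.
async function* streamChunks(pullChunk, intervalMs) {
  while (true) {
    yield pullChunk();
    // Wait before pulling again so we don't busy-loop
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}

// Usage: a for-await-of loop pulls chunks at its own pace
async function demo() {
  let count = 0;
  const fakePull = () => ({ samples: [], timestamps: [] });
  for await (const chunk of streamChunks(fakePull, 10)) {
    count += 1;
    if (count === 3) break; // stop after three chunks
  }
  return count;
}
```

Breaking out of the for-await-of loop closes the generator, so the consumer fully controls the lifetime of the stream.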

How does lsl_pull_chunk_f behave when no data is available yet (i.e. if it has timed out)?
Does it return the number of samples actually read?
Also, are you sure errCode is not a pointer to a variable where the error code will be stored?

I implemented the EventEmitter based on an interval derived from a chunkSize parameter. It appears to work well and gives me a fully reliable 256 Hz sampling rate from the Muse, with 12 samples coming in every ~50 ms.

Here are the API calls to do it, which I think are pretty simple:

const lsl = require('../index');

const CHUNK_SIZE = 12;

// Resolve an LSL stream with type='EEG'
const streams = lsl.resolve_byprop('type', 'EEG');

console.log('Resolved ', streams.length, ' streams of EEG: ', streams.map((info) => info.getName()));

console.log('Connecting...');
const streamInlet = new lsl.StreamInlet(streams[0]);
streamInlet.streamChunks(CHUNK_SIZE);
streamInlet.on('chunk', console.log);
streamInlet.on('closed', console.log);

@jdpigeon could you please share your modified node-lsl package with StreamInlet implemented?

Hey @Naresh1318, checkout the PR here: #3

The reason this hasn't been merged with @urish's repo is because I haven't figured out how to mock LSL streams for tests. If you want to just use my fork I'd suggest giving it a shot. It's been working well for me in a project that I'm working on here: https://github.com/makebrainwaves/neurolearning

Thanks a lot for your help! We were able to build an electron app that allowed students to play pong. Check it out here if you are interested: https://video.vt.edu/media/Neuroscience+students+play+Pong+with+their+brains/1_iisvk154/91886971?utm_source=cmpgn_news&utm_medium=email&utm_campaign=vtUnirelNewsDailyCMP_040819-fs1