neurosity/eeg-pipes

Persistent filter objects in filter operators

jdpigeon opened this issue · 1 comments

I've designed two different ways to persist filters between buffers in the data stream: one stores the filter objects in a variable, and one uses the scan operator. Both implementations seem to eliminate the artifact issue that occurred because new filter objects were being created every time a new buffer came in.

Variable storage:

source$ => {
  const options = {
    order,
    characteristic,
    Fs,
    Fc,
    BW
  };
  // One persistent filter instance per channel, created once when the pipe is built
  const notchArray = new Array(nbChannels)
    .fill(0)
    .map(x => createNotchIIR(options));
  return createPipe(
    source$,
    map(channelGroupBuffer =>
      channelGroupBuffer.map((channel, index) =>
        notchArray[index].multiStep(channel)
      )
    )
  );
};

Execution time for this on 10ch x 1000 samples: 1.14ms
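To make the artifact issue concrete, here's a minimal, self-contained sketch (not using fili or RxJS; the one-pole filter here is just a stand-in for the notch IIR's internal state) showing that a filter whose state persists across buffers produces the same output as filtering the whole signal at once, while recreating the filter per buffer resets its state at the buffer boundary and introduces a step artifact:

```javascript
// Minimal one-pole low-pass IIR; `prev` is the internal state that
// fili's IirFilter also carries in its biquad stages.
const createOnePole = alpha => {
  let prev = 0; // filter state
  return samples => samples.map(x => (prev = prev + alpha * (x - prev)));
};

const signal = [1, 1, 1, 1, 1, 1, 1, 1];
const bufA = signal.slice(0, 4);
const bufB = signal.slice(4);

// Persistent filter: state carries over from bufA into bufB
const persistent = createOnePole(0.5);
const streamed = [...persistent(bufA), ...persistent(bufB)];

// Reference: filter the whole signal in one pass
const wholeSignal = createOnePole(0.5)(signal);

// Recreating the filter for each buffer resets the state, so the
// output jumps back at the start of bufB — the artifact
const perBuffer = [...createOnePole(0.5)(bufA), ...createOnePole(0.5)(bufB)];

console.log(streamed.join() === wholeSignal.join());  // matches
console.log(perBuffer.join() === wholeSignal.join()); // does not match
```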

source =>
  createPipe(
    source,
    scan(
      (acc, curr) => {
        return [
          curr.map((channel, index) => acc[1][index].multiStep(channel)),
          acc[1]
        ];
      },
      [
        new Array(nbChannels).fill(0),
        new Array(nbChannels)
          .fill(0)
          .map(x =>
            createNotchIIR({ order, characteristic, Fs, Fc, gain, preGain, BW })
          )
      ]
    ),
    map(dataAndFilter => dataAndFilter[0]) // pluck just the data array to emit
  );

Execution time for this on 10ch x 1000 samples: 1.33ms
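For anyone reading the scan version: the accumulator is a `[data, filters]` pair, so the same filter instances are threaded through every buffer, and only `acc[0]` is emitted downstream. A plain-JS sketch of that pattern, with `Array.prototype.reduce` standing in for RxJS `scan` and a hypothetical `makeFilter` standing in for `createNotchIIR`:

```javascript
// Illustrative stateful filter with the same multiStep interface
// fili's IirFilter exposes; not part of eeg-pipes.
const makeFilter = () => {
  let prev = 0;
  return {
    multiStep: samples => samples.map(x => (prev = prev + 0.5 * (x - prev)))
  };
};

const nbChannels = 2;
const buffers = [
  [[1, 1], [2, 2]], // buffer 1: 2 channels x 2 samples
  [[1, 1], [2, 2]]  // buffer 2: filters must continue, not reset
];

// Same seed shape as the scan version: [data, filters]
const seed = [
  new Array(nbChannels).fill(0),
  new Array(nbChannels).fill(0).map(() => makeFilter())
];

// reduce plays the role of scan; each "emission" is acc[0],
// i.e. map(dataAndFilter => dataAndFilter[0])
const emissions = [];
buffers.reduce((acc, curr) => {
  const next = [
    curr.map((channel, index) => acc[1][index].multiStep(channel)),
    acc[1] // filters pass through unchanged
  ];
  emissions.push(next[0]);
  return next;
}, seed);

console.log(emissions);
```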

Both use a simple function I created for readability:

// CalcCascades and IirFilter come from the fili library
const createNotchIIR = options => {
  const calc = new CalcCascades();
  const coeffs = calc.bandstop(options);
  return new IirFilter(coeffs);
};

In my opinion, declaring a variable to hold the filter array is much easier to understand. However, it feels a little less like 'the RxJS way', since the operator technically isn't pure anymore.

Curious which one we should use.

We'll go with option 1 because it's a little bit more readable