automata/noflo-webaudio

Which is the best design?

automata opened this issue · 7 comments

I was thinking about how to properly design Web Audio components.

As a first approach (already implemented in existing components like Gain), each component has an audio port as its input and another as its output: when it receives data on the audio input, it connects the incoming component's audio node to its own node and then sends itself out on the audio output port.
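To make that concrete, here is a minimal sketch of the pattern in plain JavaScript. It is not the actual component code: `GainComponent` and its `onAudio` handler are hypothetical stand-ins for a NoFlo component and its audio-input handler; only the Web Audio calls are real.

```js
// Sketch of approach 1: each component wraps a Web Audio node, connects
// whatever arrives on its audio input, and then sends itself downstream so
// the next component can connect to it in turn.
var context = new (window.AudioContext || window.webkitAudioContext)();

function GainComponent(gainValue) {
  this.node = context.createGain();
  this.node.gain.value = gainValue;
}

// Stand-in for the handler of the "audio" input port
GainComponent.prototype.onAudio = function (upstream) {
  upstream.node.connect(this.node); // wire the upstream node into ours
  return this;                      // would be sent on the audio output port
};

// Usage: Oscillator -> Gain -> destination
var osc = { node: context.createOscillator() };
var gain = new GainComponent(0.5);
gain.onAudio(osc);
gain.node.connect(context.destination);
osc.node.start();
```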

An alternative approach mimics the connect/disconnect calls of the Web Audio API: we could use the attach and detach port events to mirror Web Audio's connect/disconnect. However, how do we access the component that is attaching to a given component?

Another approach is based on noflo-canvas: IIPs are treated as commands that are lazily parsed and evaluated in a common Destination component.
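For comparison, a rough sketch of what that could look like: the command names and shapes below are made up for illustration, only the Web Audio calls are real.

```js
// Sketch of approach 3: upstream components only emit a description of the
// audio graph as commands; a single Destination component evaluates them
// against the AudioContext.
var commands = [
  ['oscillator', { id: 'osc1', frequency: 440 }],
  ['gain',       { id: 'g1', gain: 0.5, input: 'osc1' }],
  ['connect',    { from: 'g1', to: 'destination' }]
];

function evaluate(context, commands) {
  var nodes = { destination: context.destination };
  commands.forEach(function (cmd) {
    var op = cmd[0], args = cmd[1];
    if (op === 'oscillator') {
      nodes[args.id] = context.createOscillator();
      nodes[args.id].frequency.value = args.frequency;
      nodes[args.id].start();
    } else if (op === 'gain') {
      nodes[args.id] = context.createGain();
      nodes[args.id].gain.value = args.gain;
      nodes[args.input].connect(nodes[args.id]);
    } else if (op === 'connect') {
      nodes[args.from].connect(nodes[args.to]);
    }
  });
  return nodes;
}

var context = new (window.AudioContext || window.webkitAudioContext)();
evaluate(context, commands);
```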

The first two approaches are interesting for changing parameters on the fly: since each component keeps a reference to its audio node, changing its parameters is easy. That could be tricky in the third approach.
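For example, with a stored node reference a parameter change is just an AudioParam call (variable names are hypothetical):

```js
// With the GainNode reference at hand, a new value arriving on a "gain"
// port can be applied directly, or ramped to avoid clicks.
var context = new (window.AudioContext || window.webkitAudioContext)();
var gainNode = context.createGain();

gainNode.gain.setValueAtTime(gainNode.gain.value, context.currentTime);
gainNode.gain.linearRampToValueAtTime(0.2, context.currentTime + 1);
```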

It is interesting to note how difficult it is to adapt the Web Audio API to FBP. I guess the main point is that we don't have the raw audio samples being sent through the edges; instead we have an OOP design abstracting the low-level audio processing.

Which approach do you prefer @forresto?

The approach that I've been wanting to try is to make a separate webaudio runtime. So instead of noflo-webaudio, webaudio-flow. This would require a similar pattern to how microflo graphs are embedded in noflo graphs.

Would be good for Seriously as well.

The MicroFlo pattern is currently entirely NoFlo<->MicroFlo specific. We want to generalize this based on the FBP runtime protocol as a "remote subgraph" feature, but it is not there yet.

Do you think it makes sense to push for client-side "remote" subgraphs within client-side noflo?

Web Audio and Seriously.js are two examples that already have a dataflow implementation under the hood. Instead of wrapping everything in noflo components, we can just sync via the protocol.
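A rough sketch of what that syncing could look like for Web Audio: the handler below maps graph messages straight onto Web Audio calls. The message shapes loosely follow the FBP runtime protocol's graph commands; the component names and the handler itself are hypothetical.

```js
// Hypothetical mapping of FBP protocol graph commands onto Web Audio calls.
var context = new (window.AudioContext || window.webkitAudioContext)();
var nodes = { destination: context.destination };

function handleGraphMessage(msg) {
  if (msg.protocol !== 'graph') return;
  var p = msg.payload;
  if (msg.command === 'addnode') {
    if (p.component === 'webaudio/Oscillator') nodes[p.id] = context.createOscillator();
    if (p.component === 'webaudio/Gain') nodes[p.id] = context.createGain();
  } else if (msg.command === 'addedge') {
    nodes[p.src.node].connect(nodes[p.tgt.node]);
  } else if (msg.command === 'removeedge') {
    nodes[p.src.node].disconnect(); // disconnect everything from this node
  }
}

// e.g. Oscillator -> destination
handleGraphMessage({ protocol: 'graph', command: 'addnode',
  payload: { id: 'osc', component: 'webaudio/Oscillator', graph: 'main' } });
handleGraphMessage({ protocol: 'graph', command: 'addedge',
  payload: { src: { node: 'osc', port: 'out' },
             tgt: { node: 'destination', port: 'in' }, graph: 'main' } });
```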

I really want to try and explore MicroFlo more, but AFAIK the main idea is to interpret messages in the FBP protocol sent by noflo-ui to a specific "interpreter" runtime, right? So, for Web Audio and Seriously.js we could have a similar "interpreter".

It reminds me of the Draw component, but instead of a specialized runtime interpreting FBP we have a component interpreting lispy-JSON commands.

Am I right or at least not so crazy? 😺

@forresto: it might, but we would need to fix/improve the cross-runtime communication so we don't regress in functionality. This is desirable anyway, though. Having particular behavior on node add/remove or network start/stop, or optimizing execution of the graph, will become significantly easier...

@automata: I don't think "interpreter" is a good word. One needs a runtime that has components, can manage and run a graph of instantiated components as a network, and can respond to the commands defined by the FBP protocol to change the graphs/networks/components.

We should probably just try it out, on WebAudio for instance. We should make sure that we can reuse sizable amounts of the NoFlo code though, at least Noflo.Graph and the runtime-communication code.

@jonnor: I really want to start a WebAudio runtime prototype. Can you give me some pointers to follow? The current iframe runtime for noflo-ui is noflo-runtime-iframe, right? Should I start by forking it, or from noflo-runtime-base?