niklaskorz/audio3d

Abstract audio implementation

niklaskorz opened this issue

To be able to switch out the implementation used for the 3D audio simulation (PannerNode, Binaural HRIR, Resonance Audio), a more abstract / generalized interface for audio nodes, listener nodes, and the "room" has to be used. This interface should expose a superset of the available settings, so that the more capable implementations can make use of every option while the less capable ones only use those they support.
These also have to be integrated into the Three.js scene graph so that an audio node can be used as a child of the actual "3D" object it belongs to.
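
A minimal sketch of what such an abstraction could look like (all names and shapes below are hypothetical, not the project's actual API):

```typescript
import * as THREE from "three";

// Superset of settings across PannerNode, BinauralFIR and Resonance Audio.
// Each implementation ignores the options it does not support.
export interface AudioSettings {
  panningModel?: "HRTF" | "equalpower"; // PannerNode only
  distanceModel?: "linear" | "inverse" | "exponential"; // PannerNode only
  rolloff?: "logarithmic" | "linear" | "none"; // Resonance Audio only
  ambisonicOrder?: number; // Resonance Audio only
  roomDimensions?: { width: number; height: number; depth: number }; // Resonance Audio only
  roomMaterials?: Record<string, string>; // Resonance Audio only
}

// An audio source that lives in the Three.js scene graph, so it can simply be
// added as a child of the 3D object it belongs to and inherit its transform.
export abstract class AbstractAudioNode extends THREE.Object3D {
  abstract setBuffer(buffer: AudioBuffer): void;
  abstract play(): void;
  abstract stop(): void;
  // Called once per frame with the node's world transform.
  abstract updateSpatialization(
    position: THREE.Vector3,
    orientation: THREE.Quaternion
  ): void;
}

export abstract class AbstractAudioScene {
  abstract applySettings(settings: AudioSettings): void;
  abstract createAudioNode(): AbstractAudioNode;
  abstract updateListener(camera: THREE.Camera): void;
}
```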

As this is relevant for some implementations, the direction the audio node is facing should be visualized in the editor.
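
For the editor visualization, a Three.js ArrowHelper attached to the audio node would be one option (a sketch; treating the local -Z axis as "forward" is an assumption):

```typescript
import * as THREE from "three";

// Attach an arrow to an audio node so its facing direction is visible in the editor.
// The arrow points along the node's local -Z axis (Three.js's default forward).
function addDirectionHelper(audioNode: THREE.Object3D): THREE.ArrowHelper {
  const direction = new THREE.Vector3(0, 0, -1); // must be normalized
  const origin = new THREE.Vector3(0, 0, 0);
  const arrow = new THREE.ArrowHelper(direction, origin, 0.5, 0xffaa00);
  audioNode.add(arrow); // inherits the node's transform, so it rotates with it
  return arrow;
}
```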

We still have to decide whether the audio implementation should be exchangeable during execution or in the editor before starting execution.

Based on what the Web Audio API exposes, the following settings should be available when using the browser's PannerNode (see the sketch after the list):

  • panningModel: HRTF or equalpower
  • distanceModel: linear, inverse, exponential
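
Both are plain PannerNode constructor options, so mapping the project-wide values through is straightforward; a minimal sketch:

```typescript
const context = new AudioContext();

// Both settings are regular PannerNode options; everything else keeps the
// Web Audio API defaults.
const panner = new PannerNode(context, {
  panningModel: "HRTF", // "HRTF" | "equalpower"
  distanceModel: "inverse", // "linear" | "inverse" | "exponential"
});

panner.connect(context.destination);
```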

Settings when using BinauralFIR: None

Settings when using Resonance Audio (see the sketch after the list):

  • Room materials and dimensions (already implemented)
  • Rolloff
  • Ambisonic order
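
For reference, a sketch of how these map onto the Resonance Audio Web SDK (the material names and rolloff values are taken from the SDK's documented options; the import style may differ from the project's setup):

```typescript
import { ResonanceAudio } from "resonance-audio"; // import path/typings may differ

const context = new AudioContext();

// Ambisonic order is a constructor option of the scene.
const scene = new ResonanceAudio(context, { ambisonicOrder: 3 });
scene.output.connect(context.destination);

// Room dimensions (meters) and per-wall materials are set on the scene.
scene.setRoomProperties(
  { width: 4, height: 2.5, depth: 5 },
  {
    left: "brick-bare",
    right: "brick-bare",
    front: "curtain-heavy",
    back: "curtain-heavy",
    down: "wood-panel",
    up: "plaster-rough",
  }
);

// Rolloff is set per source.
const source = scene.createSource();
source.setRolloff("logarithmic"); // "logarithmic" | "linear" | "none"
```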

Some of these have to be set at the per-node level (PannerNode, ResonanceAudio.Source, etc.), but I think it makes more sense to set all of them project-wide to avoid confusion.
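
One way to reconcile the two, building on the hypothetical AudioSettings interface sketched above: keep the values project-wide and have each implementation forward them to its per-node objects whenever the settings change.

```typescript
// Local sketch typings for the parts of the Resonance Audio API used here
// (not the SDK's own typings).
interface ResonanceSource {
  setRolloff(rolloff: string): void;
}
interface ResonanceScene {
  setRoomProperties(dimensions: object, materials: object): void;
  createSource(): ResonanceSource;
}

// Settings live once per project; values that are per-node in the underlying
// SDK (e.g. rolloff) are pushed down to every source when they are applied.
class ResonanceAudioImplementation {
  private sources: ResonanceSource[] = [];

  constructor(private scene: ResonanceScene) {}

  applySettings(settings: AudioSettings): void {
    if (settings.roomDimensions && settings.roomMaterials) {
      this.scene.setRoomProperties(settings.roomDimensions, settings.roomMaterials);
    }
    if (settings.rolloff !== undefined) {
      for (const source of this.sources) {
        source.setRolloff(settings.rolloff);
      }
    }
  }

  createAudioSource(): ResonanceSource {
    const source = this.scene.createSource();
    this.sources.push(source);
    return source;
  }
}
```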

Fully implemented