alternative audio data structures / storing ops instead of applying immediately
dy opened this issue · 2 comments
Following audiojs/audio-buffer-list#5.
The current API approach is already covered by many similar components, so it is destined for insignificant competition and questionable value. The main blocker and drawback is the core audio-buffer-list component, which does not add much value compared to simply storing linked audio-buffers.
Alternatively, audio could focus on storing the editing process itself rather than acting as a data wrapper with a linear API, similar to XRay's RGA.
Principle
- storing operations rather than applying them to the data (see the sketch after this list)
  - + no precision loss
  - + faster insertion/removal
  - + allows for collaborative editing
  - + allows faster re-adjusting of params of an already applied control/envelope
  - − possibly somewhat slower playback due to the applied transforms stack; hopefully heavy-duty fx are not part of the editing process
    - ! possibly compile the fx program dynamically, akin to regl
    - ! pre-render audio for faster playback
- undo/redo history methods store operations, not a full binary replica at every step
- branching allows for alternatives
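A minimal sketch of the op-log idea, with hypothetical names (`Audio`, `insert`, `remove`, `render` are illustrative, not a committed API): edits append operations, undo just drops the last one, and samples are materialized only on demand.

```js
// Hypothetical sketch — not the actual audio API. Edits are stored as
// operations; the source samples are never mutated.
class Audio {
  constructor (source) {
    this.source = source // original Float32Array, kept intact
    this.ops = []        // append-only log of edit operations
  }
  insert (offset, samples) {
    this.ops.push({ type: 'insert', offset, samples })
    return this
  }
  remove (offset, count) {
    this.ops.push({ type: 'remove', offset, count })
    return this
  }
  undo () {
    // no full binary replica per step — dropping the last op is enough
    this.ops.pop()
    return this
  }
  // materialize samples only when actually needed (playback, export)
  render () {
    const data = Array.from(this.source)
    for (const op of this.ops) {
      if (op.type === 'insert') data.splice(op.offset, 0, ...op.samples)
      else data.splice(op.offset, op.count)
    }
    return Float32Array.from(data)
  }
}

const a = new Audio(new Float32Array([0, .1, .2, .3]))
a.insert(2, [.9]).remove(0, 1)
a.render() // ≈ [.1, .9, .2, .3]
a.undo()   // drops the remove op
a.render() // ≈ [0, .1, .9, .2, .3]
```

Collaborative editing and branching then reduce to merging or forking the ops array, which is where RGA/LSEQ-like structures from the references below come in.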
Pros
- + makes audio unique
- + makes it suitable for editors
Reference structures:
- https://github.com/coast-team/mute-structs/
- https://github.com/Chat-Wane/LSEQTree
- https://github.com/atom/xray/tree/master/memo_core
In fact, git seems to be suitable for that too.
Note also that the class should technically allow utilizing any underlying model: time series, STFT, formants, HPR/HPS/SPS models (https://github.com/MTG/sms-tools/tree/master/software/models), wavelets, etc.
- + In the case of formants, for example, transforms are theoretically many times faster than on the raw time series.
- + An abstract interface would discard the sampleRate param and make Audio just a time-series data wrapper, possibly even with uncertain/irregular stops. We may want a separate time-series structure for that, which seems broadly in demand, from animation/gradient/colormap stops to compact storage of observations; see the sketch below.
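A minimal sketch of such a time-series wrapper, assuming linear interpolation between stops (all names hypothetical):

```js
// Hypothetical time-series wrapper: values paired with possibly irregular
// time stops; no sampleRate is assumed.
class TimeSeries {
  constructor (times, values) {
    this.times = Float64Array.from(times)   // monotonically increasing stops
    this.values = Float64Array.from(values)
  }
  // read a value at an arbitrary time via linear interpolation —
  // works equally for audio samples, gradient/colormap stops or observations
  at (t) {
    const last = this.times.length - 1
    if (t <= this.times[0]) return this.values[0]
    if (t >= this.times[last]) return this.values[last]
    let i = 1
    while (this.times[i] < t) i++
    const t0 = this.times[i - 1], t1 = this.times[i]
    const k = (t - t0) / (t1 - t0)
    return this.values[i - 1] * (1 - k) + this.values[i] * k
  }
}

// regular audio is just the special case times[i] = i / sampleRate
const sr = 44100
const ts = new TimeSeries([0, 1 / sr, 2 / sr], [0, .5, 1])
ts.at(.5 / sr) // → .25
```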
Lifecycle
- Initialize data model
  - Input data source
  - Convert input data source to the target model
- Modify data source
  - Create a stack of modifiers/reducers/transforms
  - Modifiers can possibly be applied in real time
- Play data source
  - Apply the stack of transforms, playing / applying transforms per-buffer (see the sketch after this list)
- Get stats
  - Should the model include stat params up front?
- Output data source
  - Apply the stack of transforms, output
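A minimal sketch of the per-buffer playback step, assuming transforms are pure per-sample functions (all names hypothetical):

```js
// Hypothetical sketch: the transform stack stays a list of functions and is
// applied lazily to each block as it is pulled, not baked into stored data.
const gain = g => x => x * g
const clip = lim => x => Math.max(-lim, Math.min(lim, x))

function * play (source, transforms, blockSize = 4) {
  for (let off = 0; off < source.length; off += blockSize) {
    const block = source.slice(off, off + blockSize)
    for (let i = 0; i < block.length; i++) {
      for (const t of transforms) block[i] = t(block[i])
    }
    yield block // hand the processed block to output
  }
}

const src = new Float32Array([.1, .2, .3, .4, .5])
for (const block of play(src, [gain(2), clip(.8)])) {
  console.log(block) // ≈ [.2, .4, .6, .8], then [.8]
}
```

Readjusting an envelope then only swaps a function in the stack; pre-rendering (as noted above) would simply cache the blocks this generator yields.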
Plan
- Collect the set of concerns and use cases it should be responsible for
- Come up with an ideal API covering all these cases
- Create baseline/edge/real-world tests for those cases
- Make the tests pass
Stores
- time-series store
- WebAssembly store
- STFT store
- harmonic model + residual store
- formants store
- wavelets store
- see the sms-tools reference above for other stores (Music Technology Group, Barcelona)
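Whatever the backing model, the stores could share one narrow read/write interface, keeping Audio agnostic of the representation. A hypothetical sketch:

```js
// Hypothetical common store contract: every store reads/writes time-domain
// blocks, converting from its internal model (spectral frames, formants, ...)
// as needed. Illustrative names only.
class TimeSeriesStore {
  constructor (data) { this.data = Float32Array.from(data) }
  get length () { return this.data.length }
  read (offset, count) { return this.data.slice(offset, offset + count) }
  write (offset, samples) { this.data.set(samples, offset); return this }
}

// an STFT store would expose the same read/write, internally running the
// forward/inverse transform per frame

const store = new TimeSeriesStore([0, .25, .5, .75])
store.write(1, [1, 1])
store.read(0, 4) // ≈ [0, 1, 1, .75]
```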