Pure JS implementation of the Web Audio API
Fork of mohayonao/web-audio-engine with the following changes:
- Use TypeScript and fix some types (#5)
- Remove BaseAudioContext.suspend() (#6)
- Add new RawDataAudioContext (#5)
- Add support for DynamicsCompressorNode (#1)
- Bug fixes
```
npm install --save web-audio-engine
```
web-audio-engine provides several AudioContext classes, one for each use case: audio playback, rendering, and simulation.
StreamAudioContext writes raw PCM audio data to a writable Node.js stream. It can be used to play back audio in real time.
new StreamAudioContext(opts?) creates a new StreamAudioContext instance.
- opts.sampleRate: number - audio sample rate (in Hz) - default: 44100
- opts.numberOfChannels: number - number of audio channels (e.g. 2: stereo) - default: 2
- opts.blockSize: number - samples per rendering quantum - default: 128
- opts.bitDepth: number - bits per sample - default: 16
- opts.float: boolean - use floating-point values - default: false
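For example, a minimal sketch that overrides the defaults above for mono, 22.05 kHz output (the specific values are arbitrary and only for illustration):

```js
import { StreamAudioContext } from 'web-audio-engine';

// Mono, 22.05 kHz, 16-bit integer PCM (values chosen for illustration)
const context = new StreamAudioContext({
  sampleRate: 22050,
  numberOfChannels: 1,
  bitDepth: 16,
});
```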
context.pipe(stream) sets the writable stream that the rendered audio is written to.
```js
import { StreamAudioContext as AudioContext } from 'web-audio-engine';

const context = new AudioContext();

// Set the output destination for the audio stream
context.pipe(process.stdout);

// To play back sound directly in this process, you can use 'node-speaker' instead:
// import Speaker from 'speaker';
// context.pipe(new Speaker());

// Start rendering audio
context.resume();

// composeWith(context);
```
RenderingAudioContext records audio data with stepwise processing. It is used to export rendered audio to a wav file or to test a web audio application.
new RenderingAudioContext(opts?) creates a new RenderingAudioContext instance.
- opts.sampleRate: number - audio sample rate (in Hz) - default: 44100
- opts.numberOfChannels: number - number of audio channels (e.g. 2: stereo) - default: 2
- opts.blockSize: number - samples per rendering quantum - default: 128
context.processTo(time) executes the rendering process up to the provided time.

- time: a number of seconds (e.g. 10) or a time string (e.g. "01:30.500", which converts to 90.5 seconds)
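Both forms specify the same kind of target time; given a context created as above, for instance:

```js
context.processTo(10);          // numeric: render up to 10 seconds
context.processTo('01:30.500'); // string: render up to 90.5 seconds
```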
context.exportAsAudioData() exports the rendered data in the AudioData format.
context.encodeAudioData(audioData, opts?) encodes audio data to a binary format.

- audioData: AudioData
- opts.bitDepth: number - bits per sample - default: 16
- opts.float: boolean - use floating-point values - default: false
```js
import fs from 'fs';
import { RenderingAudioContext as AudioContext } from 'web-audio-engine';

const context = new AudioContext();

// composeWith(context);

context.processTo('00:01:30.000');
// context.currentTime -> 90.00054421768708

context.processTo('00:02:00.000');
// context.currentTime -> 120.00072562358277

const audioData = context.exportAsAudioData();

context.encodeAudioData(audioData).then((arrayBuffer) => {
  fs.writeFileSync('output.wav', Buffer.from(arrayBuffer));
});
```
RawDataAudioContext allows you to synchronously step through an AudioContext. This is useful for streaming output and for controlling the processing rate yourself.
new RawDataAudioContext(opts?) creates a new RawDataAudioContext instance.
- opts.sampleRate: number - audio sample rate (in Hz) - default: 44100
- opts.numberOfChannels: number - number of audio channels (e.g. 2: stereo) - default: 2
- opts.blockSize: number - samples per rendering quantum - default: 128
context.process(channelData) renders the next blockSize samples of audio into channelData.
```js
import { RawDataAudioContext } from 'web-audio-engine';

const context = new RawDataAudioContext();
const { blockSize } = context;

// One Float32Array per output channel (stereo here)
const channelData = [
  new Float32Array(blockSize),
  new Float32Array(blockSize),
];

for (let i = 0; i < 100_000; i += blockSize) {
  context.process(channelData);
  // Do something with channelData
}
```
WebAudioContext renders audio through a native Web Audio API AudioContext in the browser.
new WebAudioContext(opts?) creates a new WebAudioContext instance.
- opts.context?: AudioContext - the native Web Audio API AudioContext instance
- opts.destination?: AudioNode - the output destination - default: opts.context.destination
- opts.numberOfChannels: number - number of audio channels (e.g. 2: stereo) - default: 2
- opts.blockSize: number - samples per rendering quantum - default: 128
```html
<script src="/path/to/web-audio-engine.js"></script>
<script>
  var context = new WebAudioEngine.WebAudioContext({
    context: new AudioContext(),
  });

  // composeWith(context);
  context.resume();
</script>
```
This context is compatible with the native Web Audio API OfflineAudioContext.
```js
import { OfflineAudioContext } from 'web-audio-engine';

const context = new OfflineAudioContext(2, 44100 * 10, 44100);

// composeWith(context);

context.startRendering().then((audioBuffer) => {
  console.log(audioBuffer);
});
```
AudioData is the plain-object format used to pass raw audio samples between the contexts, the decoder, and the encoder:

```ts
interface AudioData {
  numberOfChannels?: number;
  length?: number;
  sampleRate: number;
  channelData: Float32Array[];
}
```
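Since AudioData is a plain object, you can also construct one by hand. A minimal sketch, assuming the encoder.encode(audioData, opts?) form documented in the encoder section below (the sine frequency and amplitude are arbitrary):

```js
import fs from 'fs';
import wae from 'web-audio-engine';

// One second of a 440 Hz sine wave: mono, 44.1 kHz (arbitrary example values)
const sampleRate = 44100;
const samples = new Float32Array(sampleRate);
for (let i = 0; i < samples.length; i++) {
  samples[i] = 0.25 * Math.sin((2 * Math.PI * 440 * i) / sampleRate);
}

const audioData = { sampleRate, channelData: [samples] };

// Encode with the built-in wav encoder (assumes the encoder.encode form
// described in the encoder section below)
wae.encoder.encode(audioData).then((arrayBuffer) => {
  fs.writeFileSync('sine.wav', Buffer.from(arrayBuffer));
});
```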
The default decoder of web-audio-engine supports the "wav" format only. If you need to support another audio format, you must provide a decoder yourself.
decoder.get(type) returns the decoding function currently set for the given format type.
decoder.set(type, decodeFn) sets the decoding function for the given format type.

- decodeFn: (audioData: ArrayBuffer, opts?: object) => Promise<AudioData> - the decoding function to use
decoder.decode(audioData, opts?) executes decoding.

- audioData: ArrayBuffer
```js
import wae from 'web-audio-engine';
import mp3decoder from '/path/to/mp3decoder';
import fs from 'fs';

wae.decoder.set('mp3', mp3decoder);

const AudioContext = wae.RenderingAudioContext;
const context = new AudioContext();
const audioData = fs.readFileSync('amen.mp3');

context.decodeAudioData(audioData).then((audioBuffer) => {
  console.log(audioBuffer);
});
```
The default encoder of web-audio-engine supports the "wav" format only. If you need to support another audio format, you must provide an encoder yourself.
encoder.get(type) returns the encoding function currently set for the given format type.
encoder.set(type, encodeFn) sets the encoding function for the given format type.

- encodeFn: (audioData: AudioData, opts?: object) => Promise<ArrayBuffer> - the encoding function to use
encoder.encode(audioData, opts?) executes encoding.

- audioData: AudioData
- opts.type: string - audio format type - default: "wav"
```js
import wae from 'web-audio-engine';
import mp3encoder from '/path/to/mp3encoder';
import fs from 'fs';

wae.encoder.set('mp3', mp3encoder);

const AudioContext = wae.RenderingAudioContext;
const context = new AudioContext();
const audioData = context.exportAsAudioData();

context.encodeAudioData(audioData, { type: 'mp3' }).then((arrayBuffer) => {
  fs.writeFileSync('output.mp3', Buffer.from(arrayBuffer));
});
```
- AnalyserNode
- AudioBuffer
- AudioBufferSourceNode
- AudioContext
- AudioDestinationNode
- AudioNode
- AudioParam
- BiquadFilterNode (audio rate parameter is not supported)
- ChannelMergerNode
- ChannelSplitterNode
- DelayNode (noisy)
- DynamicsCompressorNode
- GainNode
- IIRFilterNode
- OscillatorNode (uses wave-table synthesis rather than PeriodicWave)
- PeriodicWave
- ScriptProcessorNode
- StereoPannerNode
- WaveShaperNode

- Nodes that are not implemented pass their input through to their output without modification.
- See: Comparison Chart of implemented nodes
```js
import Speaker from 'speaker';
import { StreamAudioContext as AudioContext } from 'web-audio-engine';

const context = new AudioContext();
const osc = context.createOscillator();
const amp = context.createGain();

// Two square-wave tones in quick succession, stopping after two seconds
osc.type = 'square';
osc.frequency.setValueAtTime(987.7666, 0);
osc.frequency.setValueAtTime(1318.5102, 0.075);
osc.start(0);
osc.stop(2);
osc.connect(amp);
osc.onended = () => {
  // Close the context and exit once the oscillator finishes
  context.close().then(() => {
    process.exit(0);
  });
};

// Hold the gain steady, then fade out by the two-second mark
amp.gain.setValueAtTime(0.25, 0);
amp.gain.setValueAtTime(0.25, 0.075);
amp.gain.linearRampToValueAtTime(0, 2);
amp.connect(context.destination);

context.pipe(new Speaker());
context.resume();
```
The online demo is here; on that site you can compare web-audio-engine with the native Web Audio API.
```
$ git clone git@github.com:mohayonao/web-audio-engine.git
$ cd web-audio-engine
$ npm install && npm run build
$ cd demo
$ npm install
$ node demo --help
```
The simplest playback demo, using node-speaker:

```
$ node demo sines
```
Rendering and exporting to a wav file:

```
$ node demo -o out.wav sines
```
Currently, this benchmark does not work in Chrome or Safari; please use Firefox.
```
$ git clone git@github.com:mohayonao/web-audio-engine.git
$ cd web-audio-engine
$ npm install && npm run build
$ cd benchmark
$ npm install
$ node .
```
MIT