app_audiofork lets you integrate raw audio streams into your third-party app by making minor adjustments to your Asterisk dialplan.
The Asterisk app works as a small "fork" between your dialplan and your app logic.
ASTERISK -> AUDIO STREAM -> WS APP SERVER
The main purpose of this app is to quickly offload audio streams to another script or app, allowing implementers to add higher-level audio processing to their dialplan.
This is not an official built-in Asterisk module, so you will have to drop the source file into the Asterisk codebase. Please use the following steps to install the module:
- copy "app_audiofork.c" to "asterisk/apps/app_audiofork.c"
- cd into your asterisk source tree
- refresh the menuselect options
rm -f ./menuselect.makeopts
- re run menuselect
make menuselect
- app_audiofork should be listed under "Applications" and selected by default.
- install asterisk
make
make install
- reload asterisk
asterisk -rx 'core reload'
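If the application does not appear after the reload, you can also load the module explicitly from the Asterisk CLI and confirm it registered (shown here only as an optional sanity check):
asterisk -rx 'module load app_audiofork.so'
asterisk -rx 'module show like audiofork'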
Here is a simple example of how to use "AudioFork()":
exten => _X.,1,Answer()
exten => _X.,n,Verbose(starting audio fork)
exten => _X.,n,AudioFork(ws://localhost:8080/)
exten => _X.,n,Verbose(audio fork was started continuing call..)
exten => _X.,n,Playback(hello-world)
exten => _X.,n,Hangup()
You will need a WebSocket server that supports receiving binary frames. Below is a simple one written in Node.js that was also used during testing; it receives audio frames from "AudioFork()" and stores them in a file.
const WebSocket = require('ws');
const fs = require('fs');

const wss = new WebSocket.Server({ port: 8080 });

// write the incoming raw audio frames straight to disk
const wstream = fs.createWriteStream('audio.raw');

wss.on('connection', function connection(ws) {
  console.log('got connection');
  ws.on('message', function incoming(message) {
    // each binary message is one raw audio frame from AudioFork()
    console.log('received frame..');
    wstream.write(message);
  });
});
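Assuming the script above is saved as server.js (the file name is just for illustration), install the ws dependency and start the server before placing any calls:
npm install ws
node server.js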
Below is an example using sox to convert the received raw audio into a playable format such as WAV:
sox -r 8000 -e signed-integer -b 16 audio.raw audio.wav
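If you just want to listen to the capture without converting it first, sox's bundled play tool accepts the same raw-format flags (assuming your sox installation includes play):
play -r 8000 -e signed-integer -b 16 -c 1 audio.raw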
Below is an example of a dialplan that sends 2 separate streams to a WebSocket server. In this example the basic dialplan and the WebSocket server have been modified to use separate URL paths, so that we can save to two separate files depending on which direction of the call we are processing.
Updated dialplan:
[main-out]
exten => _.,1,Verbose(call was placed..)
same => n,Answer()
same => n,AudioFork(ws://localhost:8080/out,D(out))
same => n,Dial(SIP/1001,60,gM(in))
same => n,Hangup()
[macro-in]
exten => _.,1,Verbose(macro-in called)
same => n,AudioFork(ws://localhost:8080/in,D(out))
Node.js server implementation:
const http = require('http');
const WebSocket = require('ws');
const url = require('url');
const fs = require('fs');

const server = http.createServer();

// one WebSocket server per audio direction
const wss1 = new WebSocket.Server({ noServer: true });
const wss2 = new WebSocket.Server({ noServer: true });

const outstream = fs.createWriteStream('out.raw');
const instream = fs.createWriteStream('in.raw');

wss1.on('connection', function connection(ws) {
  console.log('got out connection');
  ws.on('message', function incoming(message) {
    console.log('received out frame..');
    outstream.write(message);
  });
});

wss2.on('connection', function connection(ws) {
  console.log('got in connection');
  ws.on('message', function incoming(message) {
    console.log('received in frame..');
    instream.write(message);
  });
});

// route upgrade requests to the right WebSocket server based on the URL path
server.on('upgrade', function upgrade(request, socket, head) {
  const pathname = url.parse(request.url).pathname;

  if (pathname === '/out') {
    wss1.handleUpgrade(request, socket, head, function done(ws) {
      wss1.emit('connection', ws, request);
    });
  } else if (pathname === '/in') {
    wss2.handleUpgrade(request, socket, head, function done(ws) {
      wss2.emit('connection', ws, request);
    });
  } else {
    socket.destroy();
  }
});

server.listen(8080);
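With the two captures on disk, one possible follow-up (an assumption about your workflow, not something the module does) is to merge out.raw and in.raw into a single two-channel WAV with sox so both directions can be reviewed together:
sox -M -r 8000 -e signed-integer -b 16 -c 1 out.raw -r 8000 -e signed-integer -b 16 -c 1 in.raw call.wav
The two raw files will rarely be exactly the same length, so treat the result as a rough review copy.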
For a demo integration with the Google Cloud Speech APIs, please see: Asterisk Transcribe Demo
AudioFork() currently supports secure WebSocket connections. In order to create a secure WebSocket connection, you must specify the "T" option in the "AudioFork()" app options.
For example:
AudioFork(wss://example.org/in,D(out)T(on))
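On the server side, a wss:// URL means the WebSocket server has to sit behind TLS. Below is a minimal sketch of one way to do that with the same ws package used above, attached to a Node.js https server; the certificate/key file names and the listening port are placeholders, not anything required by the module:

const https = require('https');
const fs = require('fs');
const WebSocket = require('ws');

// placeholder certificate/key paths -- substitute your own files
const server = https.createServer({
  cert: fs.readFileSync('cert.pem'),
  key: fs.readFileSync('key.pem')
});

// attach the WebSocket server to the TLS-terminating https server
const wss = new WebSocket.Server({ server });
const wstream = fs.createWriteStream('audio.raw');

wss.on('connection', function connection(ws) {
  console.log('got secure connection');
  ws.on('message', function incoming(message) {
    // same raw audio frames as in the plain ws:// examples above
    wstream.write(message);
  });
});

server.listen(8443);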
Below is a list of updates planned for the module:
- Add Asterisk Manager (AMI) support
- Stop live AudioForks through AMI
- Start a new AudioFork based on channel prefix
- Apply volume gain to an AudioFork
- Mute an AudioFork
- Store responses pushed from the WebSocket server into a channel variable
For any queries or more info, please contact me directly:
Nadir Hamid <matrix.nad@gmail.com>
Thank you