edimuj/cordova-plugin-audioinput

Unable to get audio stream

justinshewell opened this issue · 2 comments

I have been trying for several hours to get the audio stream from this plugin so that I can display a waveform. My issue is very similar to this question (#74), and I added the code you gave in your answer there. I have also tried the suggestions in this SO post (https://stackoverflow.com/questions/57579956/how-to-get-audio-stream-from-cordova-plugin-audioinput-for-realtime-visualizer), but nothing works. When I log the data array to the console, every item is '0' or sometimes 'NaN'. I am testing on my Samsung Galaxy S22: it asked for permission the first time I ran the app, and it shows the green dot indicating that the microphone is on, but I am not getting any audio and the "waveform" that is generated is a flat line.

Here is my code:

```
	window.audioinput.getMicrophonePermission(function(hasPermission) {
		if(hasPermission) {
			isRecording = true;
			AudioContext = (window.AudioContext || window.webkitAudioContext);
			audioContent = new AudioContext();
			analyser = audioContent.createAnalyser();
			//analyser.connect(audioContent.destination);
			processor = audioContent.createScriptProcessor(2048, 1, 1);
			
			audioinput.start({ streamToWebAudio: true, audioContext: audioContent });
			var audioInputGainNode = audioContent.createGain();
			audioinput.connect(audioInputGainNode);
			audioInputGainNode.connect(analyser);
			analyser.connect(processor);
			processor.connect(audioContent.destination);
			processor.onaudioprocess = function() {
				var array = new Uint8Array(analyser.frequencyBinCount);
				analyser.getByteFrequencyData(array);
				console.log(array);
			}
			
			//var mediaStream = dest.stream;
			//streamSource = audioinput.getAudioContext().createMediaStreamSource(mediaStream);
			//streamSource.connect(analyser);
			analyser.fftSize = 512;
			frequencyArray = new Float32Array(analyser.fftSize);
			//analyser.onaudioprocess = function() {
				//analyser.getFloatTimeDomainData(frequencyArray);
				//console.log(frequencyArray);
			//};
			
			//var mediaRecorder = new MediaRecorder(mediaStream);
			//window.mediaStream = mediaStream;
			//window.mediaRecorder = mediaRecorder;
			
			//mediaRecorder.start();
			//startTime = performance.now();
	
			bars = [];
			$('#record-chat-message-waveform-inner').html('');
			/*mediaRecorder.ondataavailable = function(e) {
				chunks.push(e.data);
				console.log(chunks);
			}
			
			mediaRecorder.onstop = function() {
				isRecording = false;
				clearInterval(recordingInterval);
				endTime = performance.now();
				var audioBlob = new Blob(chunks, { type: 'audio/webm' });
				currentRecordingBlob = audioBlob;
				chunks = [];
				var recordedAudioURL = URL.createObjectURL(audioBlob);
				currentRecordingURL = recordedAudioURL;
			
				displayAudioTime();
				bars = [];
				drawWaveForm();
			}; */

			recordingInterval = setInterval(function() {
				doRecordingLoop();
			}, timeOffset);
		}
	});
```

Please help!

I have been digging into this and I still haven't been able to get it to work, but I have some questions I am hoping you can answer about how your plugin works. The data returned by the "audioinput" event includes a whole bunch of Float32Arrays, which I use to generate the waveform, and there is also an ArrayBuffer. I assumed that this was the actual raw audio data, but no matter what I do, when I convert that ArrayBuffer to a blob and try to play it, I get an error saying there is no audio data. So my question is: does your plugin only provide the analyzer data, or does it capture the raw binary audio data as well? I noticed that in Issue #74 you mention having to use a 3rd-party library. Can you explain why I need a 3rd-party library to capture the raw audio if your plugin is already accessing it? Thank you.
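
For reference, the event-based capture I am describing boils down to something like the sketch below. It listens for the plugin's "audioinput" event and reads the raw samples from evt.data; the handler name, the capturedChunks array, and the configuration values are just placeholders, not my exact code:

```
// Simplified sketch: collect raw chunks from the plugin's "audioinput" event
// (streamToWebAudio disabled) and compute a peak value per chunk for the bars.
var capturedChunks = [];

function onAudioInput(evt) {
	// evt.data is an array of raw samples in the range [-1, 1]
	var samples = evt.data;
	capturedChunks.push(samples);

	// Peak amplitude of this chunk, used as one "bar" of the waveform
	var peak = 0;
	for (var i = 0; i < samples.length; i++) {
		var v = Math.abs(samples[i]);
		if (v > peak) { peak = v; }
	}
	bars.push(peak);
}

window.addEventListener('audioinput', onAudioInput, false);

audioinput.start({
	streamToWebAudio: false,           // deliver data via the event instead of Web Audio
	sampleRate: 44100,                 // example value
	channels: audioinput.CHANNELS.MONO
});
```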

Hi, Justin! Sorry to hear about your issue; sadly I haven't had the time to work on this for some time now.
Does the example app work for you: https://github.com/edimuj/app-audioinput-demo ?

The plugin captures raw audio data. If you want to produce audio files, like webm, the raw data has to be converted to that format. Therefore you'll probably need a 3rd-party library to do that (e.g. https://github.com/higuma/ogg-vorbis-encoder-js or something similar).
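
To illustrate what that conversion involves, here is a rough sketch that wraps raw mono samples (floats in the [-1, 1] range) in a WAV header using only built-in browser APIs. The function name and the choice of 16-bit mono PCM are illustrative assumptions, and producing webm/ogg would still need an encoder library like the one linked above:

```
// Sketch: wrap captured raw samples (floats in [-1, 1]) in a WAV container so the
// browser can play them. Assumes mono, 16-bit PCM at the given sample rate.
function samplesToWavBlob(samples, sampleRate) {
	var buffer = new ArrayBuffer(44 + samples.length * 2);
	var view = new DataView(buffer);

	function writeString(offset, str) {
		for (var i = 0; i < str.length; i++) {
			view.setUint8(offset + i, str.charCodeAt(i));
		}
	}

	writeString(0, 'RIFF');
	view.setUint32(4, 36 + samples.length * 2, true);  // file size minus 8
	writeString(8, 'WAVE');
	writeString(12, 'fmt ');
	view.setUint32(16, 16, true);                      // fmt chunk size
	view.setUint16(20, 1, true);                       // PCM format
	view.setUint16(22, 1, true);                       // mono
	view.setUint32(24, sampleRate, true);
	view.setUint32(28, sampleRate * 2, true);          // byte rate
	view.setUint16(32, 2, true);                       // block align
	view.setUint16(34, 16, true);                      // bits per sample
	writeString(36, 'data');
	view.setUint32(40, samples.length * 2, true);

	// Convert float samples to 16-bit signed integers
	for (var i = 0; i < samples.length; i++) {
		var s = Math.max(-1, Math.min(1, samples[i]));
		view.setInt16(44 + i * 2, s < 0 ? s * 0x8000 : s * 0x7FFF, true);
	}
	return new Blob([view], { type: 'audio/wav' });
}

// Usage: flatten the chunks collected from the "audioinput" event, then play the result
// var wavBlob = samplesToWavBlob(flattenedSamples, 44100);
// new Audio(URL.createObjectURL(wavBlob)).play();
```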