WICG/speech-api

Support SpeechRecognition input from audio files and Float32Array and ArrayBuffer

guest271314 opened this issue · 3 comments

Support .wav, .webm, .ogg, and .mp3 files (file types supported by the implementation's decoders), as well as Float32Array and ArrayBuffer input, to SpeechRecognition.
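For illustration, a hypothetical usage could look like the following sketch; the Web Speech API has no such overload today, so passing an ArrayBuffer to `start()` is an assumption that only illustrates the proposal:

```ts
// Hypothetical sketch of the proposed buffer input; not part of the current spec.
async function recognizeFile(url: string): Promise<void> {
  const recognition = new (window as any).webkitSpeechRecognition();
  recognition.onresult = (e: any) =>
    console.log(e.results[0][0].transcript);

  // Any file type supported by the implementation's decoders (.wav, .webm, .ogg, .mp3)
  const bytes: ArrayBuffer = await (await fetch(url)).arrayBuffer();
  recognition.start(bytes); // hypothetical ArrayBuffer overload proposed in this issue
}
```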

Use cases for static audio file and ArrayBuffer (non-"real-time") input to SpeechRecognition include, but are not limited to:

  • TTS to audio file, audio file to STT, audio output to TTS (document reader to audio output)
  • Research, development, testing, and analysis of speech recognition technologies in general and of the accuracy of the application itself
  • Editing and modifying existing static audio files before SpeechRecognition input to achieve the expected text output

AudioWorkletNode can be used to stream Float32Array input.
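A rough sketch of routing Float32Array chunks through an AudioWorkletNode into a MediaStreamTrack is below; feeding that track to SpeechRecognition is what #66 proposes and is not something the API supports today, and chunking/resampling to 128-sample render quanta is left out for brevity:

```ts
// Worklet processor source as a string so it can be loaded from a Blob URL.
const workletSource = `
  class Float32StreamProcessor extends AudioWorkletProcessor {
    constructor() {
      super();
      this.queue = [];
      this.port.onmessage = (e) => this.queue.push(e.data); // Float32Array chunks
    }
    process(_inputs, outputs) {
      const out = outputs[0][0];
      const chunk = this.queue.shift();
      if (chunk) out.set(chunk.subarray(0, out.length)); // simplification: extra samples dropped
      return true; // keep the processor alive
    }
  }
  registerProcessor('float32-stream', Float32StreamProcessor);
`;

async function float32ToTrack(chunks: Float32Array[]): Promise<MediaStreamTrack> {
  const ctx = new AudioContext();
  const url = URL.createObjectURL(new Blob([workletSource], { type: 'text/javascript' }));
  await ctx.audioWorklet.addModule(url);
  const node = new AudioWorkletNode(ctx, 'float32-stream');
  const dest = ctx.createMediaStreamDestination();
  node.connect(dest);
  for (const chunk of chunks) node.port.postMessage(chunk);
  return dest.stream.getAudioTracks()[0];
}
```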

Related #66

There are already several means of getting from audio files and buffers to audio MediaStreamTracks. Most of your example use cases are solvable by #66 and #69.
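For example, a file can be decoded and played into a MediaStreamAudioDestinationNode to obtain a track; this is only a sketch, and actually handing that track to SpeechRecognition depends on #66/#69:

```ts
// Sketch using standard Web Audio APIs: decode a file into an AudioBuffer and
// play it into a MediaStreamAudioDestinationNode to obtain a MediaStreamTrack.
async function fileToTrack(file: File): Promise<MediaStreamTrack> {
  const ctx = new AudioContext();
  const buffer = await ctx.decodeAudioData(await file.arrayBuffer());
  const source = ctx.createBufferSource();
  source.buffer = buffer;
  const dest = ctx.createMediaStreamDestination();
  source.connect(dest);
  source.start(); // plays out in real time into the destination node
  return dest.stream.getAudioTracks()[0];
}
```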

The only thing this proposal would solve compared to those is that it could process audio faster than real-time, i.e., faster than it would take to play the audio out.

Personally I think that particular problem is better solved by integrating with something like WebCodecs if/when it becomes mature and available.

@Pehrsons How exactly would WebCodecs solve the problem of processing audio (or video) faster than "real-time" from a static file? WebCodecs appears to be based more on a bring-your-own-codec model than on an all-encompassing API intended to serve as an adapter for all possible audio and video use cases.

Internally, the STT engine, unless specifically designed for MediaStreamTrack input, would need to convert the audio stream to one of the file representations listed in this issue, typically a WAV file.
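For illustration, a minimal sketch of packaging Float32Array samples as a 16-bit mono WAV buffer, the kind of representation such an engine typically consumes; the sample rate and mono channel count are assumptions:

```ts
// Wrap raw PCM samples in a minimal 16-bit mono WAV container (RIFF header + data).
function float32ToWav(samples: Float32Array, sampleRate = 44100): ArrayBuffer {
  const dataSize = samples.length * 2;               // 16-bit PCM
  const buffer = new ArrayBuffer(44 + dataSize);
  const view = new DataView(buffer);
  const writeString = (offset: number, s: string) => {
    for (let i = 0; i < s.length; i++) view.setUint8(offset + i, s.charCodeAt(i));
  };
  writeString(0, 'RIFF');
  view.setUint32(4, 36 + dataSize, true);            // RIFF chunk size
  writeString(8, 'WAVE');
  writeString(12, 'fmt ');
  view.setUint32(16, 16, true);                      // fmt chunk size
  view.setUint16(20, 1, true);                       // PCM format
  view.setUint16(22, 1, true);                       // mono
  view.setUint32(24, sampleRate, true);
  view.setUint32(28, sampleRate * 2, true);          // byte rate
  view.setUint16(32, 2, true);                       // block align
  view.setUint16(34, 16, true);                      // bits per sample
  writeString(36, 'data');
  view.setUint32(40, dataSize, true);
  for (let i = 0; i < samples.length; i++) {
    const s = Math.max(-1, Math.min(1, samples[i]));
    view.setInt16(44 + i * 2, s < 0 ? s * 0x8000 : s * 0x7fff, true);
  }
  return buffer;
}
```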

It is not clear how either #66 or #69 solves the use cases in this issue without converting a file or buffer to a MediaStreamTrack instead of simply using the file or buffer as input.

@Pehrsons How exactly would WebCodecs solve the problem of processing audio (or video) faster than "real-time" from a static file? WebCodecs appears to be based more on a bring-your-own-codec model than on an all-encompassing API intended to serve as an adapter for all possible audio and video use cases.

WebCodecs has not settled yet, so I cannot say, but it's the only ongoing effort I'm aware of that would allow media data to be processed in non-realtime and passed around. There's OfflineAudioContext, but it doesn't really pipe into things. With WebCodecs it sounds like you'd get a ReadableStream of DecodedAudioPacket, which could be an input to SpeechRecognition, for instance.
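Purely as a sketch of what that could look like if WebCodecs settles in roughly the shape it has been heading, an AudioDecoder might turn encoded chunks into raw audio frames as below; the codec configuration is an assumption, and feeding the decoded frames to SpeechRecognition remains hypothetical:

```ts
// Decode a sequence of EncodedAudioChunk objects into raw AudioData frames,
// faster than real time, using the WebCodecs AudioDecoder.
function decodeChunks(chunks: EncodedAudioChunk[]): Promise<AudioData[]> {
  return new Promise((resolve, reject) => {
    const frames: AudioData[] = [];
    const decoder = new AudioDecoder({
      output: (frame) => frames.push(frame),
      error: reject,
    });
    // Codec string, sample rate, and channel count are assumptions for illustration.
    decoder.configure({ codec: 'opus', sampleRate: 48000, numberOfChannels: 1 });
    chunks.forEach((chunk) => decoder.decode(chunk));
    decoder.flush().then(() => resolve(frames));
  });
}
```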

Internally, the STT engine, unless specifically designed for MediaStreamTrack input, would need to convert the audio stream to one of the file representations listed in this issue, typically a WAV file.

To analyze any audio data you have to decode it first so that seems reasonable. When UAs ship with STT engines that are local, it wouldn't make sense to hand them an encoded file.

It is not clear how either #66 or #69 solves the use cases in this issue without converting a file or buffer to a MediaStreamTrack instead of simply using the file or buffer as input.

Of course, they'd solve it by decoding the file or buffer into a MediaStreamTrack. That's a fine solution, as different tools are good at different things.