WICG/speech-api

Need to improve speech recognition in conversations between multiple speakers


I used the Web Speech API demonstration to test speech recognition in a conversation between more than one speaker. As I noted, when the second speaker starts to speak (for example, one male and one female speaker), recognition does not continue. What is the problem, and how can it be improved?
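
For reference, here is a minimal sketch of how recognition might be kept running across speaker turns. It assumes Chrome's prefixed `webkitSpeechRecognition` constructor and restarts the session in `onend`; I have not confirmed that this fixes the drop-out on a voice change:

```ts
// Minimal sketch (assumes Chrome's prefixed constructor;
// not a confirmed fix for the multi-speaker drop-out described above).
const Ctor =
  (window as any).SpeechRecognition ?? (window as any).webkitSpeechRecognition;
const recognition = new Ctor();

recognition.lang = "en-US";        // assumed language for this test
recognition.continuous = true;     // keep listening across pauses and turn-taking
recognition.interimResults = true; // surface partial hypotheses while speaking

recognition.onresult = (event: any) => {
  // Log only the finalized hypotheses.
  for (let i = event.resultIndex; i < event.results.length; i++) {
    if (event.results[i].isFinal) {
      console.log(event.results[i][0].transcript);
    }
  }
};

// The engine can end the session after silence or a voice change;
// restarting in onend keeps it listening for the next speaker.
recognition.onend = () => recognition.start();

recognition.start();
```

Even if a restart keeps the session alive, as far as I can tell the spec exposes no speaker labels, so the transcript never distinguishes who said what.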