Speech recognition module for React Native using the Vosk library.
```sh
npm install -S react-native-vosk
```
Vosk uses prebuilt models to perform speech recognition offline. Download the model(s) you need from the official Vosk website. Avoid models that are too heavy: the computation time required to load them into your app could lead to a bad user experience. Then unzip the model into your app folder.

If you only need the iOS version, put the model folder wherever you want and import it as described below. If you need both iOS and Android to work, you can avoid copying the model twice by importing it from the Android assets folder in Xcode. Just do as follows:
In Android Studio, open the project manager, right-click on your project folder and select New > Folder > Assets Folder.
Then put the model folder inside the newly created assets folder. In your file tree it should be located at android/app/src/main/assets. So if you downloaded the French model named model-fr-fr, you should be able to access it at android/app/src/main/assets/model-fr-fr. In Android Studio, your project structure should look like this:
You can import as many models as you want.
In Xcode, right-click on your project folder and click "Add files to [your project name]".
Then navigate to your model folder. You can navigate to your Android assets folder as mentioned above and choose your model there; this avoids having the model copied twice in your project. If you don't use the Android build, you can put the model wherever you want and select it.
That's all. The model folder should appear in your project. When you click on it, your project target should be checked (see below).
```typescript
import Vosk from 'react-native-vosk';

// ...

const voiceRecognition = new Vosk();

voiceRecognition
  .loadModel('model-en-en')
  .then(() => {
    // We can use promises...
    const options = ['left', 'right', '[unk]'];

    voiceRecognition
      .start(options)
      .then((res: string) => {
        console.log('Result is: ' + res);
      })
      .catch((e: any) => {
        console.log('Error: ' + e);
      })
      .finally(() => {
        console.log('Recognition is complete');
      });

    // ... or events
    const resultEvent = voiceRecognition.onResult((res) => {
      console.log('An onResult event has been caught: ' + res.data);
    });

    // Don't forget to call resultEvent.remove() to delete the listener
  })
  .catch((e) => {
    console.error(e);
  });
```
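If you prefer `async`/`await`, the same flow can be sketched as below. The `MockVosk` class is a hypothetical stand-in so the snippet is self-contained; in your app you would use the real `Vosk` import instead:

```typescript
// Stand-in for the real Vosk class so this sketch runs on its own;
// in an app, replace it with: import Vosk from 'react-native-vosk';
class MockVosk {
  async loadModel(path: string): Promise<void> {
    console.log('loaded ' + path);
  }
  async start(grammar?: string[]): Promise<string> {
    return 'left'; // the real method resolves with the recognized text
  }
}

async function recognize(): Promise<string> {
  const vosk = new MockVosk();
  await vosk.loadModel('model-en-en');
  const result = await vosk.start(['left', 'right', '[unk]']);
  console.log('Result is: ' + result);
  return result;
}

recognize();
```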
Note that the start() method will ask for the audio recording permission.
Method | Argument | Return | Description |
---|---|---|---|
`loadModel` | `path: string` | `Promise` | Loads the voice model used for recognition; required before calling the `start` method |
`start` | `grammar: string[]` or none | `Promise` | Starts voice recognition and resolves with the recognized text as a string. You can restrict recognition to specific words with the `grammar` argument (e.g. `["left", "right"]`), per Kaldi's documentation |
`stop` | none | none | Stops the recognition |
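A grammar is just an array of the words you want recognized, typically ending with `"[unk]"` so out-of-vocabulary speech has somewhere to go. A small helper for building one (hypothetical, not part of the library):

```typescript
// Hypothetical helper: build a Vosk/Kaldi grammar array from a word list.
// Words are trimmed, lowercased, and deduplicated, and "[unk]" is appended
// as a catch-all for anything outside the vocabulary.
function buildGrammar(words: string[]): string[] {
  const unique = Array.from(new Set(words.map((w) => w.trim().toLowerCase())));
  return [...unique, '[unk]'];
}

console.log(buildGrammar(['Left', 'right', 'left']));
// → [ 'left', 'right', '[unk]' ]
```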
Event | Promise return | Description |
---|---|---|
`onResult` | The recognized word as a string | Triggers on a voice recognition result |
`onFinalResult` | The recognized word as a string | Triggers when recognition is stopped via the `stop()` method |
`onError` | The error that occurred, as a string or exception | Triggers when an error occurs |
`onTimeout` | The string `"timeout"` | Triggers on timeout |
```typescript
const resultEvent = voiceRecognition.onResult((res) => {
  console.log('An onResult event has been caught: ' + res.data);
});

// ...

resultEvent.remove();
```
Don't forget to remove the event listener once you don't need it anymore.
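The `onResult`/`remove()` pair follows the usual event-subscription shape: subscribing returns a handle whose `remove()` unregisters the listener. A self-contained sketch of that pattern in plain TypeScript (not the library's actual internals):

```typescript
// Minimal event-subscription pattern, mirroring onResult()/remove():
// subscribing returns a handle whose remove() unregisters the listener.
type Listener<T> = (event: T) => void;

class Emitter<T> {
  private listeners = new Set<Listener<T>>();

  subscribe(listener: Listener<T>): { remove: () => void } {
    this.listeners.add(listener);
    return { remove: () => this.listeners.delete(listener) };
  }

  emit(event: T): void {
    this.listeners.forEach((l) => l(event));
  }
}

const results = new Emitter<{ data: string }>();
const sub = results.subscribe((res) => console.log('caught: ' + res.data));

results.emit({ data: 'left' }); // listener fires: logs "caught: left"
sub.remove();
results.emit({ data: 'right' }); // listener was removed: logs nothing
```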
See the contributing guide to learn how to contribute to the repository and the development workflow.
MIT