A version in Node to play around with! It's been abstracted so it no longer needs the mic and speakers on the device running the code (though it can still use them), which means you can pass audio in and play audio back however you want to.
You need to create a JSON file for OAuth2 permissions! Follow the instructions and then:
```shell
$ npm install google-assistant
```
```js
const GoogleAssistant = require('google-assistant');

const config = {
  auth: {
    keyFilePath: 'YOUR_API_KEY_FILE_PATH.json',
    // where you want the tokens to be saved
    // will create the directory if not already there
    savedTokensPath: 'SOME_PATH/tokens.js',
  },
};

const assistant = new GoogleAssistant(config);

// starts a new conversation with the assistant
const startConversation = (conversation) => {
  // setup the conversation and send data to it
  // for a full example, see `examples/mic-speaker.js`
  conversation
    .on('audio-data', (data) => {
      // do stuff with the audio data from the server
      // usually send it to some audio output / file
    })
    .on('end-of-utterance', () => {
      // do stuff when done speaking to the assistant
      // usually just stop your audio input
    })
    .on('transcription', (text) => {
      // do stuff with the text you said to the assistant
    })
    .on('ended', (error, continueConversation) => {
      // once the conversation is ended, see if we need to follow up
      if (error) console.log('Conversation Ended Error:', error);
      else if (continueConversation) assistant.start();
      else console.log('Conversation Complete');
    })
    .on('error', error => console.error(error));
};

// will start a conversation and wait for audio data
// as soon as it's ready
assistant
  .on('ready', () => assistant.start())
  .on('started', startConversation);
```
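As a sketch of what the `audio-data` handler might do, here's a small helper — my own, not part of the library, and the name `collectAudio` is hypothetical — that gathers the assistant's audio chunks into a single Buffer you can then write to a file or pipe to a speaker. It only assumes the conversation is an EventEmitter that emits `audio-data` chunks and then `ended`, as shown above:

```javascript
// Hypothetical helper (not part of google-assistant): buffer up the
// response audio so it can be written out once the conversation ends.
// Works with anything that emits 'audio-data' chunks followed by 'ended'.
const collectAudio = (conversation) =>
  new Promise((resolve, reject) => {
    const chunks = [];
    conversation
      .on('audio-data', (data) => chunks.push(Buffer.from(data)))
      .on('ended', (error) => {
        if (error) reject(error);
        else resolve(Buffer.concat(chunks));
      })
      .on('error', reject);
  });
```

Inside `startConversation` you could then do something like `collectAudio(conversation).then((audio) => fs.writeFileSync('response.raw', audio));` (the file name is just illustrative).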
- `mic-speaker` - If you want to test input and output using your machine’s built-in hardware.
- `console-input` - If you want to use the console to type in commands instead of saying them (thanks to CTKRocks for the help on this).
If you are on macOS and see `Illegal instruction: 4` when you complete your conversation, just use this command to re-install the speaker:

```shell
$ npm install speaker --mpg123-backend=openal
```
Here are the events and methods on the main instance.

- `ready` - Emitted once your OAuth2 credentials have been saved. It's safe to start a conversation now.
- `started` - You'll get this right after a call to `start`, and it hands you a `conversation` instance (see below).
- `start()` - Call this anytime after you've gotten a `ready` event.
After a call to `start` you will get a `conversation` back. Here are the events and methods that it supports:
- `error` - If things go funky, this will be called.
- `audio-data` - Contains an audio buffer to pipe to a file or speaker.
- `end-of-utterance` - Emitted once the server detects you are done speaking.
- `transcription` - Contains the text that the server recognized from your voice.
- `ended` - After a call to `end()`, this will be emitted with an error and a boolean that will be `true` if you need to continue the conversation. This is basically your cue to call `start()` again.
This is only emitted when using IFTTT.
- `end()` - Send this when you are finished playing back the assistant's response.
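To make the `end()` timing concrete, here's a sketch — my own helper, `endWhenPlayed` is not a library function — that calls `conversation.end()` only after the writable stream you piped the response audio into (a Speaker instance, a file stream, etc.) has fully flushed:

```javascript
// Hypothetical helper (not part of google-assistant): once the output
// stream has finished being written ('finish'), tell the assistant
// we're done playing back its response so it can emit 'ended'.
const endWhenPlayed = (conversation, output) => {
  output.once('finish', () => conversation.end());
  return output;
};
```

You would write the `audio-data` chunks into `output` and call `output.end()` when the server stops sending audio; the `finish` event then triggers `conversation.end()` for you.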