vin-ni/Google-Cloud-Speech-Node-Socket-Playground

update to v1p1beta1

Closed this issue · 11 comments

Hello,

Great code. Quick question about updating to v1p1beta1 (or is it already there?): how does one do that?
I tried const Speech = require('@google-cloud/speech').v1p1beta1, which worked in the pure Node.js solution, but it gives me an error here. Thank you!

Hi,
No, I don't think it's updated. What version is it supposed to be?
I guess you could just run npm update?

It should be v1p1beta1 if using the beta, but at least v1. I think this repo is on v1beta1, but I could be wrong. I'll have to keep playing with it.

my-MacBook-Pro:src me$ node app.js
Server started on port:1337
Client Connected to server
/Users/me/Google-Cloud-Speech-Node-Socket-Playground/src/app.js:68
        recognizeStream = speech.streamingRecognize(request)
                                 ^

TypeError: Cannot read property 'streamingRecognize' of undefined
    at startRecognitionStream (/Users/me/Google-Cloud-Speech-Node-Socket-Playground/src/app.js:68:34)
    at Socket.<anonymous> (/Users/me/Google-Cloud-Speech-Node-Socket-Playground/src/app.js:53:9)
    at emitOne (events.js:116:13)
    at Socket.emit (events.js:211:7)
    at /Users/me/Google-Cloud-Speech-Node-Socket-Playground/src/node_modules/socket.io/lib/socket.js:528:12
    at _combinedTickCallback (internal/process/next_tick.js:131:7)
    at process._tickCallback (internal/process/next_tick.js:180:9)

That's the error I get when I do:

// Google Cloud
const Speech = require('@google-cloud/speech');
const speech = Speech().v1p1beta1; // Instantiates a client

Gonna keep trying though!

Yeah, I'm just trying things out too :)

OK, I just pushed an update, so it now uses @google-cloud/speech v1.5.
For the beta version, I think you'll just need a recent enough @google-cloud/speech in package.json.
And then call

const speech = require('@google-cloud/speech').v1p1beta1;
const speechClient = new speech.SpeechClient({
  // optional auth parameters.
});

instead of

const speech = require('@google-cloud/speech');
const speechClient = new speech.SpeechClient(); // Creates a client
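
Either way, the rest of app.js shouldn't need to change once the client is created. Here's a rough, untested sketch of the beta client plugged into the same streaming flow (the config values are placeholders, and request / recognizeStream are just the names that appear in your stack trace above):

// Untested sketch: beta client feeding the same streaming flow app.js uses.
// The config values below are placeholders, not the repo's actual settings.
const speech = require('@google-cloud/speech').v1p1beta1;
const speechClient = new speech.SpeechClient();

const request = {
  config: {
    encoding: 'LINEAR16',
    sampleRateHertz: 16000,
    languageCode: 'en-US',
  },
  interimResults: true, // stream partial results back to the browser
};

const recognizeStream = speechClient
  .streamingRecognize(request)
  .on('error', console.error)
  .on('data', data => {
    const result = data.results[0];
    if (result && result.alternatives[0]) {
      console.log(result.alternatives[0].transcript);
    }
  });

// Audio chunks from the socket would then be written with
// recognizeStream.write(chunk).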

What are you using the beta for?

You're amazing! I want to be able to add the metadata to improve the transcription.

Oh nice!
This doesn't work on 1.5?
Write your solution here if you figure out how to use the beta :)

I'm still not certain it's correct, but at line 106 of app.js I added:

const recognitionMetadata = {
  interactionType: 'DICTATION',
  microphoneDistance: 'NEARFIELD',
  originalMediaType: 'AUDIO',
  recordingDeviceType: 'PC',
  audioTopic: 'Animal names', // this is just an example
};
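
and then pointed the streaming config at it, roughly like this (only the metadata line is the new part; the surrounding request shape is just my sketch of what app.js already builds, not its exact code):

// Sketch: wiring the metadata object into the recognition config.
// Only the `metadata` field is new; the other values are placeholders.
const request = {
  config: {
    encoding: 'LINEAR16',
    sampleRateHertz: 16000,
    languageCode: 'en-US',
    metadata: recognitionMetadata, // the object defined above
  },
  interimResults: true,
};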

It runs with no errors, but it doesn't work quite as well as I'd hoped. Gonna try adding the dependencies from here. If you run nodejs-speech and talk, there's a noticeable difference compared to your version. I'm so grateful to have found yours; it really saved me the headache of setting up Socket.io, which is super helpful for a novice like me :D

How did it turn out? :)

Hi! Well!! I was able to add the SpeechContext needed for my use case. Thank you so much!!
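
For anyone finding this later: the SpeechContext sits in the same config block as the metadata. A sketch of roughly what mine looks like (the phrases are placeholders, not my real vocabulary):

// Sketch: a SpeechContext alongside the metadata in the recognition config.
// The phrases array is a placeholder list, not my actual vocabulary.
const config = {
  encoding: 'LINEAR16',
  sampleRateHertz: 16000,
  languageCode: 'en-US',
  metadata: recognitionMetadata, // the metadata object from my earlier comment
  speechContexts: [{
    phrases: ['aardvark', 'capybara', 'pangolin'],
  }],
};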

Did the recognitionMetadata work in the end?

Yes it did!