Integration with Google Home = no Windows, win $10,000, go to Google IO
nalbion opened this issue
Would it be possible to integrate Aenea with Google Home? This would unlock Dragonfly on Linux computers.
You could win up to $10,000 and flights, accommodation and entry to Google IO: https://developers.google.com/actions/challenge/
You could use api.ai or the Actions API.
The challenges would include:
- the OAuth 2.0 flow
- designing Intents and speech patterns that map to Dragonfly grammars (see the sketch below)
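As a rough illustration, this is the kind of Dragonfly grammar those Intents would have to mirror (standard dragonfly API; the specific commands here are only examples, and dictation extras would presumably map to Entities / slot values on the Google side):

```python
# A typical Dragonfly grammar: each mapping entry would need a matching
# Intent, and the Dictation extra would become an Entity / slot value.
from dragonfly import Grammar, MappingRule, Dictation, Key, Text

class ExampleRule(MappingRule):
    mapping = {
        "new line": Key("enter"),
        "save file": Key("c-s"),
        "say <text>": Text("%(text)s"),
    }
    extras = [Dictation("text")]

grammar = Grammar("example")
grammar.add_rule(ExampleRule())
grammar.load()
```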
@calmofthestorm this was not intended as spam. Do you think that this would even be possible? Would it be more practical to do the integration in Aenea or Dragonfly?
I see. I must admit that mentioning a contest and such does make this look a lot like spam, especially when it's very unclear to me what any of this has to do with Google Home. What exactly are you asking whether it's possible to do? What does Google Home have to do with using Linux to code by voice?
"Google Home" is a "smart speaker", similar to Amazon Alexa, which provides an API so that 3rd party developers can add "skills" or "custom actions" to it.
Potentially, you could sit it on your desk, next to your (say, Linux) PC and say:
Okay Google, let's do some pair programming
or
Okay Google, talk to Dragonfly
or
Okay Google, create a new React functional component in TypeScript, let's call it "Dragonfly grammar"
The "Okay Google" prefix wakes the device up (alternatively you can tap on the top of the unit) - you can then have an ongoing conversation without the prefix, and have an background process (Aenea or Dragonfly) feed it context and custom "Entities" (slot values)
Ah I see. Yes, you could probably implement a voice coding solution on top of Google Home's speech recognition. It might even be possible to implement Home as a backend for Dragonfly, though the Conversation API and such seem different enough that it might instead make more sense to create your own grammar specification library rather than trying to use Dragonfly. In particular, the ability to progressively refine an ambiguous request is not something Dragonfly would be able to leverage as is.
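To sketch the mismatch (purely hypothetical code, not an existing API in either project): "Home as a backend for Dragonfly" would mean flattening Dragonfly spec strings into the up-front training phrases an Intent expects, which is lossy at best:

```python
# Hypothetical: crudely flatten Dragonfly spec strings into intent training
# phrases. The "@sys.any:text" annotation is illustrative only -- check the
# actual entity/parameter syntax of whatever API you target.
def specs_to_training_phrases(mapping):
    phrases = []
    for spec in mapping:
        # Replace a Dragonfly <text> extra with a free-form dictation slot.
        phrases.append(spec.replace("<text>", "@sys.any:text"))
    return phrases

print(specs_to_training_phrases({"new line": None, "say <text>": None}))
# -> ['new line', 'say @sys.any:text']
```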
I'm also curious what problem you're trying to solve -- if you don't want to depend on the fragility of a Windows box running Dragon, why not consider Sphinx or Kaldi? https://github.com/dwks/silvius is trying to do something similar on Kaldi, for example. There's also http://voicecode.io/ if you're looking for something with a higher level of support and polish than I can provide (standard disclaimer: I have not used any of these products and have no direct experience with how good they are). If your concern is relying on proprietary software, isn't a web-based API and custom hardware device much worse than Dragon/Windows on a VM isolated from the internet?
yeah, you're probably right