A Web Component wrapper for the Web Speech API that lets you do voice recognition (speech to text) and speech synthesis (text to speech) using Polymer.
1. Install the component using Bower:

   ```sh
   $ bower install voice-elements --save
   ```

   Or download as ZIP.

2. Import Web Components' polyfill:

   ```html
   <script src="bower_components/webcomponentsjs/webcomponents.min.js"></script>
   ```

3. Import Custom Element:

   ```html
   <link rel="import" href="bower_components/voice-elements/dist/voice-player.html">
   <link rel="import" href="bower_components/voice-elements/dist/voice-recognition.html">
   ```

4. Start using it!

   ```html
   <voice-player></voice-player>
   <voice-recognition></voice-recognition>
   ```
Provides a simple DOM API for speech synthesis (text to speech).
Attribute | Options | Default | Description
---|---|---|---
`autoplay` | boolean | `false` | Specifies if the audio should play when the page loads.
`accent` | `en-US`, `en-GB`, `es-ES`, `fr-FR`, `it-IT`, `de-DE`, `ja-JP`, `ko-KR`, `zh-CN` | `en-US` | Specifies the language to be synthesized and spoken.
`text` | string | `You are awesome` | Specifies the text to be synthesized and spoken.
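For example, the attributes above can be combined in markup like this (the `text` phrase is illustrative):

```html
<!-- Speaks a British English phrase as soon as the page loads -->
<voice-player autoplay accent="en-GB" text="Hello there"></voice-player>
```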
Method | Parameters | Returns | Description
---|---|---|---
`speak()` | None. | Nothing. | Triggers the voice audio to be played.
`cancel()` | None. | Nothing. | Triggers the voice audio to be canceled.
`pause()` | None. | Nothing. | Triggers the voice audio to be paused.
`resume()` | None. | Nothing. | Triggers the voice audio to be resumed.
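The methods above are called on the element itself once it is in the DOM. A minimal sketch (the `id` and `text` values are illustrative):

```html
<voice-player id="player" text="You are awesome"></voice-player>
<script>
  // Grab the element and drive playback from script
  var player = document.querySelector('#player');
  player.speak();   // start speaking the text attribute
  player.pause();   // pause mid-utterance
  player.resume();  // continue from where it paused
</script>
```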
Event | Description
---|---
`onstart` | Triggers when the voice starts being spoken.
`onend` | Triggers when the voice finishes being spoken.
`onerror` | Triggers when the voice player detects an error.
`onpause` | Triggers when the voice player is paused.
`onresume` | Triggers when the voice player is resumed.
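A sketch of listening for these events, assuming they are dispatched as DOM events under the names listed above (the `id` and handler bodies are illustrative):

```html
<voice-player id="greeting" text="Welcome back"></voice-player>
<script>
  var greeting = document.querySelector('#greeting');
  // Log the lifecycle of an utterance
  greeting.addEventListener('onstart', function () {
    console.log('speech started');
  });
  greeting.addEventListener('onend', function () {
    console.log('speech finished');
  });
</script>
```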
Provides a simple DOM API for voice recognition (speech to text).
Attribute | Options | Default | Description
---|---|---|---
`continuous` | boolean | `true` | Specifies if the recognition should continue when the user pauses while speaking.
`text` | string | | Returns the recognized text.
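A usage sketch for the attributes above, assuming the element deserializes `continuous="false"` into a boolean as Polymer typically does for boolean properties:

```html
<!-- Stop recognizing automatically when the speaker pauses -->
<voice-recognition continuous="false"></voice-recognition>
```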
Method | Parameters | Returns | Description
---|---|---|---
`start()` | None. | Nothing. | Starts the voice recognition.
`stop()` | None. | Nothing. | Requests the recognition service to stop listening to more audio.
`abort()` | None. | Nothing. | Requests to immediately stop listening and stop recognizing.
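A sketch of driving recognition from a button (the `id`, button, and handler are illustrative):

```html
<voice-recognition id="recognition"></voice-recognition>
<button onclick="listen()">Listen</button>
<script>
  function listen() {
    var recognition = document.querySelector('#recognition');
    recognition.start();  // begin listening to the microphone
    // Later: recognition.stop() lets pending audio finish being
    // recognized, while recognition.abort() discards it immediately.
  }
</script>
```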
Event | Description
---|---
`onstart` | Triggers when the recognition begins.
`onerror` | Triggers when there's a recognition error.
`onend` | Triggers when the recognition ends.
`onresult` | Triggers when there's a recognition result.
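A sketch of reading the recognized text when a result arrives, assuming the result event is dispatched under the `onresult` name listed above and that the `text` attribute holds the recognized text (per the attribute table):

```html
<voice-recognition id="recognition"></voice-recognition>
<script>
  var recognition = document.querySelector('#recognition');
  recognition.addEventListener('onresult', function () {
    // The element's text property returns the recognized text
    console.log('You said:', recognition.text);
  });
  recognition.start();
</script>
```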
Unfortunately, the Web Speech API still has poor browser support. Check Can I Use for more information.
IE | Chrome | Firefox | Opera | Safari
---|---|---|---|---
None ✘ | Latest ✔ | None ✘ | None ✘ | Latest ✔
In order to run it locally you'll need to fetch some dependencies and a basic server setup.

1. Install Bower and the Grunt CLI globally:

   ```sh
   $ [sudo] npm install -g bower grunt-cli
   ```

2. Install local dependencies:

   ```sh
   $ bower install && npm install
   ```

3. To test your project, start the development server and open http://localhost:8000:

   ```sh
   $ grunt server
   ```

4. To build the distribution files before releasing a new version:

   ```sh
   $ grunt build
   ```

5. To provide a live demo, send everything to the gh-pages branch:

   ```sh
   $ grunt deploy
   ```
1. Fork it!
2. Create your feature branch: `git checkout -b my-new-feature`
3. Commit your changes: `git commit -m 'Add some feature'`
4. Push to the branch: `git push origin my-new-feature`
5. Submit a pull request :D
For detailed changelog, check Releases.
MIT License © Zeno Rocha