Watson-Speech

This collection demonstrates how to quickly embed Watson Speech in your own applications.


Self-Serve Assets for Embeddable AI using Watson Speech

Assets/Accelerators for Watson Speech (this repo) contains self-serve notebooks and documentation on how to create Speech models with the Watson Speech library, how to serve Watson Speech models, and how to make inference requests from custom applications. With an IBM Cloud account, a full production sample can be deployed in roughly one hour.

Key Technologies:

  • IBM Watson Speech to Text Library for Embed transcribes spoken audio into written text. The service leverages machine learning to combine knowledge of grammar, language structure, and the composition of audio and voice signals to accurately transcribe the human voice. It continuously updates and refines its transcription as it receives more speech audio. The service is ideal for applications that need to extract high-quality speech transcripts for use cases such as call centers, customer care, agent assistance, and similar solutions. You can customize the Watson Speech to Text service to suit your language and application needs (see the Speech to Text request sketch after this list).

  • IBM Watson Text to Speech Library for Embed synthesizes natural-sounding speech from written text. The service streams the results back to the client with minimal delay. The service is appropriate for voice-driven and screenless applications, where audio is the preferred method of output. You can customize the Watson Text to Speech service to suit your language and application needs (see the Text to Speech request sketch after this list). Both services offer HTTP and WebSocket programming interfaces that make them suitable for any application that produces or accepts audio.
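As a rough illustration of the HTTP interface, the sketch below sends a WAV file to a Speech to Text endpoint and prints the returned transcript. The host, port, path, and model name (`localhost:1080`, `en-US_Multimedia`) are assumptions for a locally running container; substitute the values for your own deployment.

```python
import requests

# Hypothetical endpoint: adjust host/port/path to wherever your
# Speech to Text instance (embedded container or IBM Cloud) is reachable.
STT_URL = "http://localhost:1080/speech-to-text/api/v1/recognize"

with open("sample.wav", "rb") as audio_file:
    response = requests.post(
        STT_URL,
        headers={"Content-Type": "audio/wav"},
        params={"model": "en-US_Multimedia"},  # example model name
        data=audio_file,  # raw audio is streamed as the request body
    )
response.raise_for_status()

# The service returns JSON with one or more transcript alternatives per result.
for result in response.json().get("results", []):
    print(result["alternatives"][0]["transcript"])
```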
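Similarly, the sketch below posts text to a Text to Speech endpoint and writes the synthesized audio to a file. Again, the host, port, path, and voice name (`localhost:1080`, `en-US_AllisonV3Voice`) are assumptions to adapt to your deployment.

```python
import requests

# Hypothetical endpoint: adjust host/port/path to your Text to Speech deployment.
TTS_URL = "http://localhost:1080/text-to-speech/api/v1/synthesize"

response = requests.post(
    TTS_URL,
    headers={"Content-Type": "application/json", "Accept": "audio/wav"},
    params={"voice": "en-US_AllisonV3Voice"},  # example voice name
    json={"text": "Hello from Watson Text to Speech."},
)
response.raise_for_status()

# The response body is the synthesized audio; save it for playback.
with open("output.wav", "wb") as audio_file:
    audio_file.write(response.content)
```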

Outline

Resources