speech_to_text_api_v30

SpeechToTextApiV30 - JavaScript client for the Speech to Text API v3.0 (package speech_to_text_api_v30). This SDK is automatically generated by the Swagger Codegen project:

  • API version: v3.0
  • Package version: v3.0
  • Build package: io.swagger.codegen.languages.JavascriptClientCodegen

Installation

npm

To publish the library as an npm package, follow the procedure described in "Publishing npm packages".

Then install it via:

npm install speech_to_text_api_v30 --save

Local development

To use the library locally without publishing it to a remote npm registry, first install the dependencies: change into the directory containing package.json (and this README), which we will call JAVASCRIPT_CLIENT_DIR, and run:

npm install

Next, link it globally in npm with the following, also from JAVASCRIPT_CLIENT_DIR:

npm link

Finally, switch to the directory in which you want to use speech_to_text_api_v30 and run:

npm link /path/to/<JAVASCRIPT_CLIENT_DIR>

You should now be able to require('speech_to_text_api_v30') in JavaScript files within the directory where you ran the last command.
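
As a quick sanity check, a minimal script along these lines (a sketch; it only assumes the link above succeeded) should load the linked package:

// check_link.js - verify that the locally linked package resolves
var SpeechToTextApiV30 = require('speech_to_text_api_v30');
console.log('Loaded client with exports:', Object.keys(SpeechToTextApiV30));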

git

If the library is hosted in a git repository, e.g. https://github.com/YOUR_USERNAME/speech_to_text_api_v30, then install it via:

    npm install YOUR_USERNAME/speech_to_text_api_v30 --save

For browser

The library also works in a browser environment via npm and browserify. After following the above steps with Node.js and installing browserify with npm install -g browserify, run the following (assuming main.js is your entry file, i.e. the JavaScript file in which you actually use this library):

browserify main.js > bundle.js

Then include bundle.js in the HTML pages.
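
For reference, a minimal main.js entry file could look like the following sketch; the API key handling mirrors the Getting Started example below:

// main.js - browser entry file bundled by browserify
var SpeechToTextApiV30 = require('speech_to_text_api_v30');

var defaultClient = SpeechToTextApiV30.ApiClient.instance;
defaultClient.authentications['apiKeyHeader'].apiKey = 'YOUR API KEY';

var api = new SpeechToTextApiV30.DefaultApi();
// ... call API methods here and render the results in the page.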

Webpack Configuration

When using Webpack you may encounter the error "Module not found: Error: Cannot resolve module". In that case you most likely need to disable the AMD loader. Add or merge the following section into your webpack config:

module: {
  rules: [
    {
      parser: {
        amd: false
      }
    }
  ]
}
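
For context, a complete (minimal) webpack.config.js containing this rule might look like the following sketch; the entry and output values are placeholders for your own project:

// webpack.config.js - minimal configuration with the AMD loader disabled
module.exports = {
  entry: './main.js',
  output: {
    filename: 'bundle.js'
  },
  module: {
    rules: [
      {
        parser: {
          amd: false // avoids "Module not found: Error: Cannot resolve module"
        }
      }
    ]
  }
};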

Getting Started

Please follow the installation instructions, then execute the following JavaScript code:

var SpeechToTextApiV30 = require('speech_to_text_api_v30');

var defaultClient = SpeechToTextApiV30.ApiClient.instance;

// Configure API key authorization: apiKeyHeader
var apiKeyHeader = defaultClient.authentications['apiKeyHeader'];
apiKeyHeader.apiKey = "YOUR API KEY";
// Uncomment the following line to set a prefix for the API key, e.g. "Token" (defaults to null)
//apiKeyHeader.apiKeyPrefix['Ocp-Apim-Subscription-Key'] = "Token"

// Configure API key authorization: apiKeyQuery
var apiKeyQuery = defaultClient.authentications['apiKeyQuery'];
apiKeyQuery.apiKey = "YOUR API KEY";
// Uncomment the following line to set a prefix for the API key, e.g. "Token" (defaults to null)
//apiKeyQuery.apiKeyPrefix['subscription-key'] = "Token"

var api = new SpeechToTextApiV30.DefaultApi();

var id = "id_example"; // {String} Format - uuid. The identifier of the model that will be copied.

var opts = { 
  'modelCopy': new SpeechToTextApiV30.ModelCopy() // {ModelCopy} The body contains the subscription key of the target subscription.
};

var callback = function(error, data, response) {
  if (error) {
    console.error(error);
  } else {
    console.log('API called successfully. Returned data: ' + data);
  }
};
api.copyModelToSubscription(id, opts, callback);
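
Other operations follow the same callback pattern. For example, listing projects could look like the sketch below; it assumes getProjects accepts an optional opts object followed by a callback, which is the usual shape of methods produced by Swagger Codegen, and that the API's skip/top query parameters are exposed as opts keys:

// List the first 10 projects in the subscription (sketch)
api.getProjects({ 'top': 10 }, function(error, data, response) {
  if (error) {
    console.error(error);
  } else {
    console.log('Projects: ' + JSON.stringify(data));
  }
});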

Documentation for API Endpoints

All URIs are relative to https://westus.api.cognitive.microsoft.com/speechtotext/v3.0
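
If your Speech resource is in a different region, you can point the client at that region's endpoint by overriding the base path on the shared ApiClient instance (a sketch; basePath is the property exposed by Swagger Codegen JavaScript clients, and <your-region> is a placeholder):

// Use the endpoint of your own region instead of westus
var defaultClient = SpeechToTextApiV30.ApiClient.instance;
defaultClient.basePath = 'https://<your-region>.api.cognitive.microsoft.com/speechtotext/v3.0';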

Class Method HTTP request Description
SpeechToTextApiV30.DefaultApi copyModelToSubscription POST /models/{id}/copyto Copy Model
SpeechToTextApiV30.DefaultApi createDataset POST /datasets Create Dataset
SpeechToTextApiV30.DefaultApi createEndpoint POST /endpoints Create Endpoint
SpeechToTextApiV30.DefaultApi createEvaluation POST /evaluations Create Evaluation
SpeechToTextApiV30.DefaultApi createHook POST /webhooks Create Web Hook
SpeechToTextApiV30.DefaultApi createModel POST /models Create Model
SpeechToTextApiV30.DefaultApi createProject POST /projects Create Project
SpeechToTextApiV30.DefaultApi createTranscription POST /transcriptions Create Transcription
SpeechToTextApiV30.DefaultApi deleteBaseModelLog DELETE /endpoints/base/{locale}/files/logs/{logId} Delete Base Model Endpoint Log
SpeechToTextApiV30.DefaultApi deleteBaseModelLogs DELETE /endpoints/base/{locale}/files/logs Delete All Base Model Endpoint Logs
SpeechToTextApiV30.DefaultApi deleteDataset DELETE /datasets/{id} Delete Dataset
SpeechToTextApiV30.DefaultApi deleteEndpoint DELETE /endpoints/{id} Delete Endpoint
SpeechToTextApiV30.DefaultApi deleteEndpointLog DELETE /endpoints/{id}/files/logs/{logId} Delete Custom Model Endpoint Log
SpeechToTextApiV30.DefaultApi deleteEndpointLogs DELETE /endpoints/{id}/files/logs Delete All Custom Model Endpoint Logs
SpeechToTextApiV30.DefaultApi deleteEvaluation DELETE /evaluations/{id} Delete Evaluation
SpeechToTextApiV30.DefaultApi deleteHook DELETE /webhooks/{id} Delete Web Hook
SpeechToTextApiV30.DefaultApi deleteModel DELETE /models/{id} Delete Model
SpeechToTextApiV30.DefaultApi deleteProject DELETE /projects/{id} Delete Project
SpeechToTextApiV30.DefaultApi deleteTranscription DELETE /transcriptions/{id} Delete Transcription
SpeechToTextApiV30.DefaultApi getBaseModel GET /models/base/{id} Get Base Model
SpeechToTextApiV30.DefaultApi getBaseModelLog GET /endpoints/base/{locale}/files/logs/{logId} Get Base Model Endpoint Log
SpeechToTextApiV30.DefaultApi getBaseModelLogs GET /endpoints/base/{locale}/files/logs Get Base Model Endpoint Logs
SpeechToTextApiV30.DefaultApi getBaseModelManifest GET /models/base/{id}/manifest Get Base Model Manifest
SpeechToTextApiV30.DefaultApi getBaseModels GET /models/base Get Base Models
SpeechToTextApiV30.DefaultApi getDataset GET /datasets/{id} Get Dataset
SpeechToTextApiV30.DefaultApi getDatasetFile GET /datasets/{id}/files/{fileId} Get Dataset File
SpeechToTextApiV30.DefaultApi getDatasetFiles GET /datasets/{id}/files Get Dataset Files
SpeechToTextApiV30.DefaultApi getDatasets GET /datasets Get Datasets
SpeechToTextApiV30.DefaultApi getDatasetsForProject GET /projects/{id}/datasets Get Datasets for Project
SpeechToTextApiV30.DefaultApi getEndpoint GET /endpoints/{id} Get Endpoint
SpeechToTextApiV30.DefaultApi getEndpointLog GET /endpoints/{id}/files/logs/{logId} Get Custom Model Endpoint Log
SpeechToTextApiV30.DefaultApi getEndpointLogs GET /endpoints/{id}/files/logs Get Custom Model Endpoint Logs
SpeechToTextApiV30.DefaultApi getEndpoints GET /endpoints Get Endpoints
SpeechToTextApiV30.DefaultApi getEndpointsForProject GET /projects/{id}/endpoints Get Endpoints for Project
SpeechToTextApiV30.DefaultApi getEvaluation GET /evaluations/{id} Get Evaluation
SpeechToTextApiV30.DefaultApi getEvaluationFile GET /evaluations/{id}/files/{fileId} Get Evaluation File
SpeechToTextApiV30.DefaultApi getEvaluationFiles GET /evaluations/{id}/files Get Evaluation Files
SpeechToTextApiV30.DefaultApi getEvaluations GET /evaluations Get Evaluations
SpeechToTextApiV30.DefaultApi getEvaluationsForProject GET /projects/{id}/evaluations Get Evaluations for Project
SpeechToTextApiV30.DefaultApi getHealthStatus GET /healthstatus Get Health Status
SpeechToTextApiV30.DefaultApi getHook GET /webhooks/{id} Get Web Hook
SpeechToTextApiV30.DefaultApi getHooks GET /webhooks Get Web Hooks
SpeechToTextApiV30.DefaultApi getModel GET /models/{id} Get Model
SpeechToTextApiV30.DefaultApi getModelManifest GET /models/{id}/manifest Get Custom Model Manifest
SpeechToTextApiV30.DefaultApi getModels GET /models Get Custom Models
SpeechToTextApiV30.DefaultApi getModelsForProject GET /projects/{id}/models Get Models for Project
SpeechToTextApiV30.DefaultApi getProject GET /projects/{id} Get Project
SpeechToTextApiV30.DefaultApi getProjects GET /projects Get Projects
SpeechToTextApiV30.DefaultApi getSupportedLocalesForDatasets GET /datasets/locales Get Supported Locales for Datasets
SpeechToTextApiV30.DefaultApi getSupportedLocalesForEndpoints GET /endpoints/locales Get Supported Locales for Endpoints
SpeechToTextApiV30.DefaultApi getSupportedLocalesForEvaluations GET /evaluations/locales Get Supported Locales for Evaluations
SpeechToTextApiV30.DefaultApi getSupportedLocalesForModels GET /models/locales Get Supported Locales for Models
SpeechToTextApiV30.DefaultApi getSupportedLocalesForTranscriptions GET /transcriptions/locales Get Supported Locales for Transcriptions
SpeechToTextApiV30.DefaultApi getSupportedProjectLocales GET /projects/locales Get Supported Locales for Projects
SpeechToTextApiV30.DefaultApi getTranscription GET /transcriptions/{id} Get Transcription
SpeechToTextApiV30.DefaultApi getTranscriptionFile GET /transcriptions/{id}/files/{fileId} Get Transcription File
SpeechToTextApiV30.DefaultApi getTranscriptionFiles GET /transcriptions/{id}/files Get Transcription Files
SpeechToTextApiV30.DefaultApi getTranscriptions GET /transcriptions Get Transcriptions
SpeechToTextApiV30.DefaultApi getTranscriptionsForProject GET /projects/{id}/transcriptions Get Transcriptions for Project
SpeechToTextApiV30.DefaultApi pingHook POST /webhooks/{id}/ping Ping Web Hook
SpeechToTextApiV30.DefaultApi testHook POST /webhooks/{id}/test Test Web Hook
SpeechToTextApiV30.DefaultApi updateDataset PATCH /datasets/{id} Update Dataset
SpeechToTextApiV30.DefaultApi updateEndpoint PATCH /endpoints/{id} Update Endpoint
SpeechToTextApiV30.DefaultApi updateEvaluation PATCH /evaluations/{id} Update Evaluation
SpeechToTextApiV30.DefaultApi updateHook PATCH /webhooks/{id} Update Web Hook
SpeechToTextApiV30.DefaultApi updateModel PATCH /models/{id} Update Model
SpeechToTextApiV30.DefaultApi updateProject PATCH /projects/{id} Update Project
SpeechToTextApiV30.DefaultApi updateTranscription PATCH /transcriptions/{id} Update Transcription
SpeechToTextApiV30.DefaultApi uploadDatasetFromForm POST /datasets/upload Create Dataset from Form

Documentation for Models

Documentation for Authorization

apiKeyHeader

  • Type: API key
  • API key parameter name: Ocp-Apim-Subscription-Key
  • Location: HTTP header

apiKeyQuery

  • Type: API key
  • API key parameter name: subscription-key
  • Location: URL query string