Node.js client library to use the Watson APIs.
- You need an IBM Cloud account.
npm install watson-developer-cloud
The examples folder has basic and advanced examples. The examples within each service assume that you already have service credentials.
Credentials are checked for in the following order:
- Hard-coded or programmatic credentials passed to the service constructor
- Environment variables: the SERVICE_NAME_USERNAME and SERVICE_NAME_PASSWORD properties (or SERVICE_NAME_API_KEY when appropriate) and, optionally, SERVICE_NAME_URL
- If using IAM: SERVICE_NAME_IAM_APIKEY and optionally SERVICE_NAME_IAM_URL, or SERVICE_NAME_IAM_ACCESS_TOKEN
- IBM-Cloud-supplied credentials (via the VCAP_SERVICES JSON-encoded environment property)
If you run your app in IBM Cloud, the SDK gets credentials from the VCAP_SERVICES environment variable.
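For example, here is a minimal sketch of relying on environment variables; the variable names follow the SERVICE_NAME pattern above and assume a Discovery service instance:

// Minimal sketch: let the SDK pick up credentials from the environment.
// Assumes DISCOVERY_USERNAME, DISCOVERY_PASSWORD, and optionally DISCOVERY_URL
// were exported before starting the app, for example:
//   export DISCOVERY_USERNAME=<username>
//   export DISCOVERY_PASSWORD=<password>
var DiscoveryV1 = require('watson-developer-cloud/discovery/v1');

var discovery = new DiscoveryV1({
  version: '2017-09-01'
  // no credentials passed here - the SDK reads them from the environment
});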
See the examples/ folder for Browserify and Webpack client-side SDK examples (with server-side generation of auth tokens).
Note: not all services currently support CORS, and therefore not all services can be used client-side. Of those that do, most require an auth token to be generated server-side via the Authorization Service.
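For example, a server-side token endpoint might look like the following sketch; it assumes Express, and the /api/token route and credentials are placeholders rather than part of the SDK:

// Illustrative sketch: an Express endpoint that hands out Watson auth tokens
// to a client-side app. The route name and credentials are placeholders.
var express = require('express');
var watson = require('watson-developer-cloud');

var app = express();

var authorization = new watson.AuthorizationV1({
  username: '<Speech to Text username>',
  password: '<Speech to Text password>',
  url: 'https://stream.watsonplatform.net/authorization/api'
});

app.get('/api/token', function(req, res) {
  authorization.getToken(
    { url: 'https://stream.watsonplatform.net/speech-to-text/api' },
    function(err, token) {
      if (err) {
        res.status(500).send('Error retrieving token');
      } else {
        res.send(token);
      }
    }
  );
});

app.listen(3000);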
Watson services are migrating to token-based Identity and Access Management (IAM) authentication.
- With some service instances, you authenticate to the API by using IAM.
- In other instances, you authenticate by providing the username and password for the service instance.
- Visual Recognition uses a form of API key only with instances created before May 23, 2018. Newer instances of Visual Recognition use IAM.
To find out which authentication to use, view the service credentials. You find the service credentials for authentication the same way for all Watson services:
- Go to the IBM Cloud Dashboard page.
- Either click an existing Watson service instance or click Create resource > AI and create a service instance.
- Click Show to view your service credentials.
- Copy the url and either the apikey or the username and password.
Some services use token-based Identity and Access Management (IAM) authentication. IAM authentication uses a service API key to get an access token that is passed with the call. Access tokens are valid for approximately one hour and must be regenerated.
You supply either an IAM service API key or an access token:
- Use the API key to have the SDK manage the lifecycle of the access token. The SDK requests an access token, ensures that the access token is valid, and refreshes it if necessary.
- Use the access token if you want to manage the lifecycle yourself. For details, see Authenticating with IAM tokens. If you want to switch to API key, override your stored IAM credentials with an IAM API key.
const DiscoveryV1 = require('watson-developer-cloud/discovery/v1');

// in the constructor, letting the SDK manage the IAM token
const discovery = new DiscoveryV1({
url: '<service_url>',
version: '<version-date>',
iam_apikey: '<iam_api_key>',
iam_url: '<iam_url>', // optional - the default value is https://iam.bluemix.net/identity/token
});
// in the constructor, assuming control of managing IAM token
const discovery = new DiscoveryV1({
url: '<service_url>',
version: '<version-date>',
iam_access_token: '<access-token>'
});
// after instantiation, assuming control of managing IAM token
const discovery = new DiscoveryV1({
url: '<service_url>',
version: '<version-date>'
});
discovery.setAccessToken('<access-token>')
var DiscoveryV1 = require('watson-developer-cloud/discovery/v1');
var discovery = new DiscoveryV1({
version: '{version}',
username: '{username}',
password: '{password}'
});
Important: This type of authentication works only with Visual Recognition instances created before May 23, 2018. Newer instances of Visual Recognition use IAM.
var VisualRecognitionV3 = require('watson-developer-cloud/visual-recognition/v3');
var visualRecognition = new VisualRecognitionV3({
version: '{version}',
api_key: '{api_key}'
});
Custom headers can be passed with any request. Each method accepts an optional headers parameter; headers passed this way override the defaults that the SDK would otherwise send.
For example, this is how you can pass custom headers to the Watson Assistant service. Here, the 'custom' value for 'Accept-Language' overrides the default 'Accept-Language' header, while 'Custom-Header' does not override anything and is simply sent along with the request.
var watson = require('watson-developer-cloud');

var assistant = new watson.AssistantV1({
/* username, password, version, url, etc... */
});
assistant.message({
workspace_id: 'something',
input: {'text': 'Hello'},
headers: {
'Custom-Header': 'custom',
'Accept-Language': 'custom'
}
}, function(err, result, response) {
if (err)
console.log('error:', err);
else
console.log(JSON.stringify(result, null, 2));
});
To access the HTTP response, call any method with a callback that takes three parameters; the third parameter is the full response, from which you can read, among other things, the response headers.
Here is an example of how to access the response headers for Watson Assistant:
var watson = require('watson-developer-cloud');

var assistant = new watson.AssistantV1({
/* username, password, version, url, etc... */
});
assistant.message(params, function(err, result, response) {
if (err)
console.log('error:', err);
else
console.log(response.headers);
});
By default, all requests are logged. This can be disabled by setting the X-Watson-Learning-Opt-Out header when creating the service instance:
var watson = require('watson-developer-cloud');

var myInstance = new watson.WhateverServiceV1({
/* username, password, version, url, etc... */
headers: {
"X-Watson-Learning-Opt-Out": true
}
});
You can find links to the documentation at https://console.bluemix.net/developer/watson/documentation. Find the service that you're interested in, click API reference, and then select the Node tab.
There are also auto-generated JSDocs available at http://watson-developer-cloud.github.io/node-sdk/master/
If you are having difficulties using the APIs or have a question about the Watson services, please ask a question at dW Answers or Stack Overflow.
The Authorization service can generate auth tokens for situations where providing the service username/password is undesirable.
Tokens are valid for 1 hour and may be sent using the X-Watson-Authorization-Token header or the watson-token query param.
Note that the token is supplied URL-encoded, and will not be accepted if it is double-encoded in a querystring.
NOTE: Authenticating with the X-Watson-Authorization-Token header or the watson-token query param is now deprecated. The token continues to work with Cloud Foundry services, but is not supported for services that use Identity and Access Management (IAM) authentication. For details, see Authenticating with IAM tokens or the README in the IBM Watson SDK you use. The Authorization SDK now supports returning IAM Access Tokens when instantiated with an IAM API key.
var watson = require('watson-developer-cloud');
// to get an IAM Access Token
var authorization = new watson.AuthorizationV1({
iam_apikey: '<Service API key>',
iam_url: '<IAM endpoint URL - OPTIONAL>',
});
authorization.getToken(function (err, token) {
if (!token) {
console.log('error:', err);
} else {
// Use your token here
}
});
// to get a Watson Token - NOW DEPRECATED
var authorization = new watson.AuthorizationV1({
username: '<Text to Speech username>',
password: '<Text to Speech password>',
url: 'https://stream.watsonplatform.net/authorization/api', // Speech tokens
});
authorization.getToken({
url: 'https://stream.watsonplatform.net/text-to-speech/api'
},
function (err, token) {
if (!token) {
console.log('error:', err);
} else {
// Use your token here
}
});
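Once you have a Watson token, you attach it to requests yourself. Here is an illustrative sketch, not part of the SDK, that reuses the authorization instance from the example above and sends the (deprecated) token as a header on a raw HTTPS request to the Text to Speech voices endpoint:

// Illustrative sketch: send the (deprecated) Watson token as a request header.
// The token returned by getToken() is already URL-encoded, so do not encode it
// again if you pass it as the watson-token query param instead.
var https = require('https');

authorization.getToken(
  { url: 'https://stream.watsonplatform.net/text-to-speech/api' },
  function(err, token) {
    if (err) {
      console.log('error:', err);
      return;
    }
    https.get(
      {
        host: 'stream.watsonplatform.net',
        path: '/text-to-speech/api/v1/voices',
        headers: { 'X-Watson-Authorization-Token': token }
      },
      function(res) {
        console.log('status:', res.statusCode);
      }
    );
  }
);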
Use the Assistant service to determine the intent of a message.
Note: You must first create a workspace via IBM Cloud. See the documentation for details.
var AssistantV1 = require('watson-developer-cloud/assistant/v1');
var assistant = new AssistantV1({
username: '<username>',
password: '<password>',
url: 'https://gateway.watsonplatform.net/assistant/api/',
version: '2018-02-16'
});
assistant.message(
{
input: { text: "What's the weather?" },
workspace_id: '<workspace id>'
},
function(err, response) {
if (err) {
console.error(err);
} else {
console.log(JSON.stringify(response, null, 2));
}
}
);
This service has been renamed to Assistant.
Use the Discovery Service to search and analyze structured and unstructured data.
var DiscoveryV1 = require('watson-developer-cloud/discovery/v1');
var discovery = new DiscoveryV1({
username: '<username>',
password: '<password>',
url: 'https://gateway.watsonplatform.net/discovery/api/',
version: '2017-09-01'
});
discovery.query(
{
environment_id: '<environment_id>',
collection_id: '<collection_id>',
query: 'my_query'
},
function(err, response) {
if (err) {
console.error(err);
} else {
console.log(JSON.stringify(response, null, 2));
}
}
);
Translate text from one language to another or identify a language using the Language Translator service.
var LanguageTranslatorV3 = require('watson-developer-cloud/language-translator/v3');
var languageTranslator = new LanguageTranslatorV3({
username: '<username>',
password: '<password>',
url: 'https://gateway.watsonplatform.net/language-translator/api/',
version: 'YYYY-MM-DD',
});
languageTranslator.translate(
{
text: 'A sentence must have a verb',
source: 'en',
target: 'es'
},
function(err, translation) {
if (err) {
console.log('error:', err);
} else {
console.log(JSON.stringify(translation, null, 2));
}
  }
);
languageTranslator.identify(
{
text:
'The language translator service takes text input and identifies the language used.'
},
function(err, language) {
if (err) {
console.log('error:', err);
} else {
console.log(JSON.stringify(language, null, 2));
}
}
);
Note: Language Translator v3 is now available. The v2 Language Translator API will no longer be available after July 31, 2018. To take advantage of the latest service enhancements, migrate to the v3 API. View the Migrating to Language Translator v3 page for more information.
Translate text from one language to another or identify a language using the Language Translator service.
var LanguageTranslatorV2 = require('watson-developer-cloud/language-translator/v2');
var languageTranslator = new LanguageTranslatorV2({
username: '<username>',
password: '<password>',
url: 'https://gateway.watsonplatform.net/language-translator/api/'
});
languageTranslator.translate(
{
text: 'A sentence must have a verb',
source: 'en',
target: 'es'
},
function(err, translation) {
if (err) {
console.log('error:', err);
} else {
console.log(JSON.stringify(translation, null, 2));
}
  }
);
languageTranslator.identify(
{
text:
'The language translator service takes text input and identifies the language used.'
},
function(err, language) {
if (err) {
console.log('error:', err);
} else {
console.log(JSON.stringify(language, null, 2));
}
}
);
Use the Natural Language Classifier service to create a classifier instance by providing a set of representative strings and a set of one or more correct classes for each as training data. Then use the trained classifier to classify new text and retrieve the best-matching classes or next actions for your application.
var NaturalLanguageClassifierV1 = require('watson-developer-cloud/natural-language-classifier/v1');
var classifier = new NaturalLanguageClassifierV1({
username: '<username>',
password: '<password>',
url: 'https://gateway.watsonplatform.net/natural-language-classifier/api/'
});
classifier.classify(
{
text: 'Is it sunny?',
classifier_id: '<classifier-id>'
},
function(err, response) {
if (err) {
console.log('error:', err);
} else {
console.log(JSON.stringify(response, null, 2));
}
}
);
See this example to learn how to create a classifier.
Natural Language Understanding is a collection of natural language processing APIs that help you understand sentiment, keywords, entities, high-level concepts, and more.
var fs = require('fs');
var NaturalLanguageUnderstandingV1 = require('watson-developer-cloud/natural-language-understanding/v1.js');
var nlu = new NaturalLanguageUnderstandingV1({
username: '<username>',
password: '<password>',
version: '2018-04-05',
url: 'https://gateway.watsonplatform.net/natural-language-understanding/api/'
});
// Read the HTML to analyze into file_data (the path shown is illustrative)
var file_data = fs.readFileSync('./resources/example.html');

nlu.analyze(
{
html: file_data, // Buffer or String
features: {
concepts: {},
keywords: {}
}
},
function(err, response) {
if (err) {
console.log('error:', err);
} else {
console.log(JSON.stringify(response, null, 2));
}
}
);
Analyze text in English and get a personality profile by using the Personality Insights service.
var PersonalityInsightsV3 = require('watson-developer-cloud/personality-insights/v3');
var personalityInsights = new PersonalityInsightsV3({
username: '<username>',
password: '<password>',
version: '2016-10-19',
url: 'https://gateway.watsonplatform.net/personality-insights/api/'
});
personalityInsights.profile(
{
content: 'Enter more than 100 unique words here...',
content_type: 'text/plain',
consumption_preferences: true
},
function(err, response) {
if (err) {
console.log('error:', err);
} else {
console.log(JSON.stringify(response, null, 2));
}
}
);
Use the Speech to Text service to recognize the text from a .wav file.
var SpeechToTextV1 = require('watson-developer-cloud/speech-to-text/v1');
var fs = require('fs');
var speechToText = new SpeechToTextV1({
username: '<username>',
password: '<password>',
url: 'https://stream.watsonplatform.net/speech-to-text/api/'
});
var params = {
// From file
audio: fs.createReadStream('./resources/speech.wav'),
content_type: 'audio/l16; rate=44100'
};
speechToText.recognize(params, function(err, res) {
if (err)
console.log(err);
else
console.log(JSON.stringify(res, null, 2));
});
// or streaming
fs.createReadStream('./resources/speech.wav')
.pipe(speechToText.recognizeUsingWebSocket({ content_type: 'audio/l16; rate=44100' }))
.pipe(fs.createWriteStream('./transcription.txt'));
Use the Text to Speech service to synthesize text into a .wav file.
var TextToSpeechV1 = require('watson-developer-cloud/text-to-speech/v1');
var fs = require('fs');
var textToSpeech = new TextToSpeechV1({
username: '<username>',
password: '<password>',
url: 'https://stream.watsonplatform.net/text-to-speech/api/'
});
var params = {
text: 'Hello from IBM Watson',
voice: 'en-US_AllisonVoice', // Optional voice
accept: 'audio/wav'
};
// Synthesize speech, correct the wav header, then save to disk
// (wav header requires a file length, but this is unknown until after the header is already generated and sent)
textToSpeech
.synthesize(params, function(err, audio) {
if (err) {
console.log(err);
return;
}
textToSpeech.repairWavHeader(audio);
fs.writeFileSync('audio.wav', audio);
console.log('audio.wav written with a corrected wav header');
});
Use the Tone Analyzer service to analyze the emotion, writing and social tones of a text.
var ToneAnalyzerV3 = require('watson-developer-cloud/tone-analyzer/v3');
var toneAnalyzer = new ToneAnalyzerV3({
username: '<username>',
password: '<password>',
version: '2016-05-19',
url: 'https://gateway.watsonplatform.net/tone-analyzer/api/'
});
toneAnalyzer.tone(
{
tone_input: 'Greetings from Watson Developer Cloud!',
content_type: 'text/plain'
},
function(err, tone) {
if (err) {
console.log(err);
} else {
console.log(JSON.stringify(tone, null, 2));
}
}
);
Use the Visual Recognition service to classify an image.
var VisualRecognitionV3 = require('watson-developer-cloud/visual-recognition/v3');
var fs = require('fs');
var visualRecognition = new VisualRecognitionV3({
url: '<service_url>',
version: '2018-03-19',
iam_apikey: '<iam_api_key>',
});
var params = {
images_file: fs.createReadStream('./resources/car.png')
};
visualRecognition.classify(params, function(err, res) {
if (err) {
console.log(err);
} else {
console.log(JSON.stringify(res, null, 2));
}
});
Sample code for integrating Tone Analyzer and Conversation is provided in the examples directory.
By default, the library tries to use Basic Auth and will ask for an api_key or a username and password, and will send an Authorization: Basic XXXXXXX header. You can avoid this by using use_unauthenticated.
var watson = require('watson-developer-cloud');
var assistant = new watson.AssistantV1({
use_unauthenticated: true
});
This library relies on the request npm module to call the Watson services. To debug your app, add 'request' to the NODE_DEBUG environment variable:
$ NODE_DEBUG='request' node app.js
where app.js is your Node.js file.
Running all the tests:
$ npm test
Running a specific test:
$ mocha -g '<test name>'
Find more open source projects on the IBM Github Page.
This library is licensed under Apache 2.0. Full license text is available in COPYING.
See CONTRIBUTING.