Visual_Recognition_API_Examples

Most of my learning is from the Watson Developer Cloud portal:

https://www.ibm.com/watson/developercloud/visual-recognition/api/v3/#introduction

The first step is to train Watson with a set of images to be used for reference later. The training library is a zip file that you create, containing images of the object/person/location from different angles and in different shots.
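As a sketch of that step (the directory layout and file names here are hypothetical), the training zip can be built with Python's standard zipfile module:

```python
import zipfile
from pathlib import Path

def build_training_zip(image_dir, zip_path):
    """Bundle all .jpg images in a directory into a zip for VR training."""
    with zipfile.ZipFile(zip_path, "w") as zf:
        for img in sorted(Path(image_dir).glob("*.jpg")):
            zf.write(img, arcname=img.name)  # store flat, no directory prefix
    return zip_path
```

For example, `build_training_zip("right_signs/", "right_images.zip")` would produce one of the zips referenced below.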

This is called creating a classifier. Once the classifier is created, check whether training is complete by issuing a REST call and verifying that Watson has finished processing.
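A minimal sketch of that status check, assuming the v3 endpoint with an api_key query parameter (the key, version date, and classifier id below are placeholders, not values from this repo):

```python
import json
from urllib.request import urlopen

API_URL = "https://gateway-a.watsonplatform.net/visual-recognition/api/v3"
API_KEY = "YOUR_API_KEY"   # placeholder, not a real key
VERSION = "2016-05-20"     # assumed API version date

def fetch_classifier(classifier_id):
    """GET the classifier record; Watson reports its training status in it."""
    url = (f"{API_URL}/classifiers/{classifier_id}"
           f"?api_key={API_KEY}&version={VERSION}")
    with urlopen(url) as resp:
        return json.load(resp)

def training_done(classifier_json):
    """Training is finished once the status field reads "ready"."""
    return classifier_json.get("status") == "ready"
```

`training_done` works on the parsed JSON, so it can be checked without a live API key.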

Later, feed in a test image and let the Watson Visual Recognition engine check whether it matches the library, returning a score. The higher the score, the more likely it is that the image matches the ones in the library.
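To interpret the result, something like the following can pull the top-scoring class out of a classify response (the JSON shape is assumed from the v3 API; the ids and class names in the usage example are illustrative):

```python
def best_match(classify_response):
    """Return the (class, score) pair with the highest score, or None."""
    scored = [
        (cls["class"], cls["score"])
        for image in classify_response.get("images", [])
        for clf in image.get("classifiers", [])
        for cls in clf.get("classes", [])
    ]
    return max(scored, key=lambda pair: pair[1]) if scored else None
```

Given a response containing classes `right` (0.92) and `left` (0.31), `best_match` returns `("right", 0.92)`.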

See the readme file in this repo for the starter scripts.

Some snapshots from when I was experimenting with the tools.

(screenshot)

Create the library (classifier) by supplying zip files of positive and negative images.

The terms "positive images" and "negative images" are keywords! They indicate to the VR engine how to score the test image when it is given.
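In the v3 API those keywords surface as multipart form field names: the positive zip is sent in a part named `{classname}_positive_examples` and the negative zip in a part named `negative_examples`. A small helper (class and file names here are illustrative) to build those field names:

```python
def training_fields(class_name, positive_zip, negative_zip):
    """Map Watson's expected multipart field names to the training zips."""
    return {
        f"{class_name}_positive_examples": positive_zip,  # keyword suffix
        "negative_examples": negative_zip,                # keyword name
    }
```

For a class named `right`, this yields the fields `right_positive_examples` and `negative_examples`.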

(screenshot)

Listing classifiers. Since I have already created the library, you will see road_signs_159xxx listed.

(screenshot)

This error occurs when a classifier (library) already exists and you try to create it again.

(screenshot)

Snapshot taken after creating the classifier (library). The engine takes time to digest the data.

Call the REST API to check whether the VR engine is ready to start classifying (interpreting) the image we are about to feed it.

The status should say "ready" before we start experimenting with our images (this should take 2-3 minutes).
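That 2-3 minute wait can be scripted as a simple polling loop. In this sketch the fetch function is passed in, so the loop itself has no network dependency (the interval and retry count are arbitrary choices, not values from the API docs):

```python
import time

def wait_until_ready(fetch, classifier_id, poll_seconds=15, max_polls=20):
    """Poll the classifier until its status is "ready", or give up."""
    for _ in range(max_polls):
        if fetch(classifier_id).get("status") == "ready":
            return True
        time.sleep(poll_seconds)
    return False
```

In practice `fetch` would be the REST status call; for a dry run it can be any function returning the classifier JSON.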

(screenshot)

Feed in a right-direction image and check that the VR engine detects it and returns a high score.

A higher score indicates that the new image resembles the images in the positive zip file.

Likewise, a lower score indicates that the image does not resemble any image in the positive zip file.

This is not about an exact match, but about the likeness of the image to those in the library (zip) file.

The myparams.json file can include another classifier called "default", i.e. the VR engine's built-in library.

One key concept is including the threshold argument when making the REST API call.

Understanding the contents of myparams.json is important when classifying (interpreting) images.

The importance of "threshold" is well documented by the VR team in this article:

(We have to be selective about including the "default" classifier in myparams.json.)
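A plausible myparams.json along those lines (the classifier id, key names, and threshold value are illustrative, based on the v3 classify parameters), together with the effect the threshold has on reported classes:

```python
import json

# Illustrative parameters: a custom classifier plus the built-in "default"
# one, and a threshold below which classes should not be reported.
myparams = {
    "classifier_ids": ["road_signs_159xxx", "default"],
    "threshold": 0.5,
}

def keep_classes(classes, threshold):
    """Drop any class whose score falls below the threshold."""
    return [c for c in classes if c["score"] >= threshold]

print(json.dumps(myparams, indent=2))
```

With a threshold of 0.5, a class scored 0.2 (like the left-sign case below) would simply not appear in the results.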

https://developer.ibm.com/answers/questions/327301/classify-against-default-class-and-custom-class-in/

(screenshot)

Example of a lower score: this happened because I fed in a left-direction image and validated it against the VR engine library I had already configured.

(screenshot)

This is the right-direction image I used to check whether it resembled any images in the right_images.zip library.

(screenshot)

The left-direction image I used to check whether it matched any image in the positive (right images) zip file.

(screenshot)