A slimmed-down version of the MAX Object Detector Web App for use in the MAX tutorial.
This repository's default branch, initial, contains TODOs meant to be completed while following the MAX Developer Tutorial.
A working version of the app without the TODOs can be found on the solution branch.
This is the Node.js version of this Web App; you can also check out the Python version.
Start the Model API
Start the Web App
- Get a local copy of the repository
- Install dependencies
- Start the web app server
- Configure ports (Optional)
- Try out the full version (Optional)
NOTE: The instructions in this section are a modified version of those found on the Object Detector Project Page
To run the Docker image, which automatically starts the model serving API, run:
$ docker run -it -p 5000:5000 quay.io/codait/max-object-detector
This will pull a pre-built image from the Quay.io container registry (or use an existing image if already cached locally) and run it. If you'd rather build and run the model locally, or deploy on a Kubernetes cluster, you can follow the steps in the model README.
The API server automatically generates an interactive Swagger documentation page. Go to http://localhost:5000 to load it. From there you can explore the API and also create test requests.
Use the model/predict endpoint to upload a test image and get predicted labels for the image from the API. The coordinates of the bounding box are returned in the detection_box field, which contains an array of normalized coordinates (ranging from 0 to 1) in the form [ymin, xmin, ymax, xmax].
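Since the coordinates are normalized, a client that wants to draw boxes on the original image has to scale them by the image's dimensions. The following sketch shows one way to do that; toPixelBox is a hypothetical helper, not part of the app or the API:

```javascript
// Hypothetical helper (not part of the app): converts a normalized
// detection_box [ymin, xmin, ymax, xmax] into pixel coordinates for a
// given image size, e.g. for drawing a bounding box on a canvas.
function toPixelBox(detectionBox, imageWidth, imageHeight) {
  const [ymin, xmin, ymax, xmax] = detectionBox;
  return {
    left: Math.round(xmin * imageWidth),
    top: Math.round(ymin * imageHeight),
    width: Math.round((xmax - xmin) * imageWidth),
    height: Math.round((ymax - ymin) * imageHeight)
  };
}

// Example: a box covering the middle of a 640x480 image.
console.log(toPixelBox([0.25, 0.25, 0.75, 0.75], 640, 480));
// { left: 160, top: 120, width: 320, height: 240 }
```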
The model assets folder contains a few images you can use to test out the API, or you can use your own.
You can also test it on the command line, for example:
$ curl -F "image=@assets/dog-human.jpg" -XPOST http://localhost:5000/model/predict
You should see a JSON response like the one below:
{
  "status": "ok",
  "predictions": [
    {
      "label_id": "1",
      "label": "person",
      "probability": 0.944034993648529,
      "detection_box": [
        0.1242099404335022,
        0.12507188320159912,
        0.8423267006874084,
        0.5974075794219971
      ]
    },
    {
      "label_id": "18",
      "label": "dog",
      "probability": 0.8645511865615845,
      "detection_box": [
        0.10447660088539124,
        0.17799153923988342,
        0.8422801494598389,
        0.732001781463623
      ]
    }
  ]
}
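As a sketch of how a client might consume this response, the standalone snippet below filters the predictions by probability and pulls out the labels. The JSON is copied from the example response above; labelsAbove is a made-up helper name, not something the API or app provides:

```javascript
// Sample response, copied verbatim from the example above.
const response = {
  status: "ok",
  predictions: [
    { label_id: "1", label: "person", probability: 0.944034993648529,
      detection_box: [0.1242099404335022, 0.12507188320159912, 0.8423267006874084, 0.5974075794219971] },
    { label_id: "18", label: "dog", probability: 0.8645511865615845,
      detection_box: [0.10447660088539124, 0.17799153923988342, 0.8422801494598389, 0.732001781463623] }
  ]
};

// Hypothetical helper: keep only predictions at or above a minimum
// probability, and return just their labels.
function labelsAbove(predictions, minProbability) {
  return predictions
    .filter((p) => p.probability >= minProbability)
    .map((p) => p.label);
}

console.log(labelsAbove(response.predictions, 0.9)); // [ 'person' ]
console.log(labelsAbove(response.predictions, 0.5)); // [ 'person', 'dog' ]
```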
You can also control the probability threshold for which objects are returned using the threshold argument, like below:
$ curl -F "image=@assets/dog-human.jpg" -XPOST "http://localhost:5000/model/predict?threshold=0.5"
The optional threshold parameter is the minimum probability value for predicted labels returned by the model. The default value for threshold is 0.7.
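If you are building the request URL in JavaScript rather than on the command line, the built-in WHATWG URL class handles the query-string encoding for you. This is a minimal sketch; predictUrl is a hypothetical name, not part of the app:

```javascript
// Hypothetical sketch: build the predict endpoint URL, with an optional
// threshold query parameter, before issuing the POST request.
function predictUrl(base, threshold) {
  const url = new URL("/model/predict", base);
  if (threshold !== undefined) {
    // URLSearchParams takes care of encoding the value.
    url.searchParams.set("threshold", String(threshold));
  }
  return url.toString();
}

console.log(predictUrl("http://localhost:5000"));
// http://localhost:5000/model/predict
console.log(predictUrl("http://localhost:5000", 0.5));
// http://localhost:5000/model/predict?threshold=0.5
```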
Clone the web app repository locally. In a terminal, run the following command:
$ git clone https://github.com/IBM/max-tutorial-app-nodejs.git
Change directory into the repository base folder:
$ cd max-tutorial-app-nodejs
Make sure Node.js and npm are installed; then, in a terminal, run the following command:
$ npm install
You then start the web app by running:
$ node app
You can then access the web app at: http://localhost:8090
If you want to use a different port or are running the model API at a different location, you can change these with command-line options:
$ node app --port=[new port] --model=[endpoint url including protocol and port]
The latest release of the full web app is deployed with the model API above and is available at http://localhost:5000/app.
The full version includes additional features, such as filtering the detected objects by label or setting a threshold for the prediction probability.