Integrate image ML endpoint into VARS
Closed this issue · 11 comments
@kevinsbarnard has spun up a copy of https://gitlab.com/bgwoodward/keras-model-server-fast-api on http://prometheus.shore.mbari.org:8082/ (API). Add integration with VARS.
See also mbari-org/deepsea-ai#20
Adding code to display the bounding boxes on a single annotation (read-only) in the normal image view in VARS. The boxes aren't displayed yet; it's not clear why.
The API spec is at https://mbari.stoplight.io/docs/vaa/bc3d411f074fd-predict. Example response:
```json
{
  "success": true,
  "predictions": [
    {
      "category_id": "Microstomus pacificus",
      "scores": [0.874488115310669],
      "bbox": [419.2205810546875, 329.05548095703125, 527.6322021484375, 484.7042541503906]
    },
    {
      "category_id": "Asteroidea",
      "scores": [0.5905897617340088],
      "bbox": [3.2457876205444336, 88.5366439819336, 75.68214416503906, 134.9093475341797]
    }
  ]
}
```
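For reference, a minimal sketch of parsing this response shape in Python. The function name and threshold are illustrative (not part of VARS or the server); field names follow the example above, and the bbox coordinate convention should be checked against the Stoplight spec.

```python
def parse_predictions(response: dict, min_score: float = 0.5) -> list:
    """Extract predictions whose top score meets the threshold.

    Assumes the response shape shown above: a "success" flag and a
    "predictions" list with "category_id", "scores", and "bbox" fields.
    """
    if not response.get("success"):
        return []
    kept = []
    for pred in response.get("predictions", []):
        score = max(pred.get("scores", [0.0]))
        if score >= min_score:
            kept.append({
                "concept": pred["category_id"],
                "score": score,
                # Pixel coordinates; see the API spec for the exact convention
                "bbox": pred["bbox"],
            })
    return kept


example = {
    "success": True,
    "predictions": [
        {"category_id": "Microstomus pacificus",
         "scores": [0.874488115310669],
         "bbox": [419.22, 329.06, 527.63, 484.70]},
        {"category_id": "Asteroidea",
         "scores": [0.5905897617340088],
         "bbox": [3.25, 88.54, 75.68, 134.91]},
    ],
}
# Keeps only the Microstomus pacificus prediction at a 0.7 threshold
print(parse_predictions(example, min_score=0.7))
```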
TODO: Add a configuration pane for the ML service
Internal Workflow:
```mermaid
sequenceDiagram
    actor U as User
    participant C as ConfigPane
    participant P as Preferences
    participant V as VARS
    participant M as MLService
    participant S as Sharktopoda
    participant A as MLStage

    Note over U, M: ML Endpoint Configuration
    U-)C: Set/Update ML endpoint
    U-)C: Close Settings/ConfigPane
    C-)M: Close existing MLService (in Data)
    C-)P: Save endpoint to local preferences

    Note over U, A: Manually trigger prediction
    U->>+V: Run Prediction
    V->>V: Get MLService from <Data>
    alt no ML service
        V->>+P: Look up endpoint
        P-->>-V: <endpoint>
        alt endpoint does not exist
            V-xU: Notify user to set endpoint
        else endpoint exists
            V->>V: Create new MLService/store in <Data>
        end
    end
    V->>V: Get/Create MLService from Data
    V->>+S: Framegrab
    S-->>-V: <image>
    V->>+M: Run Prediction
    M-->>-V: <bounding boxes>
    V-)A: Show bounding boxes to user
    U-)A: Edit/Approve/Cancel
    A-)V: <HandlePredictionsCmd>
    A->>A: Close Window
    V->>-U: <done>
```
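The endpoint-lookup branch of the diagram can be sketched in code roughly as follows. VARS itself is Java; this is just a Python sketch of the control flow, and the class, dict keys, and function names are all hypothetical.

```python
class MLServiceError(Exception):
    """Raised when no ML endpoint has been configured."""


def get_or_create_ml_service(data: dict, preferences: dict) -> dict:
    """Mirror the 'alt no ML service' branch: reuse the cached service,
    else create one from the saved endpoint, else tell the user to set one.

    `data` stands in for VARS's shared Data object and `preferences` for
    local preferences; both are illustrative placeholders.
    """
    service = data.get("ml_service")
    if service is not None:
        return service
    endpoint = preferences.get("ml_endpoint")
    if not endpoint:
        # Corresponds to "Notify user to set endpoint" in the diagram
        raise MLServiceError("No ML endpoint configured; set one in the ConfigPane")
    # Placeholder for constructing a real MLService client
    service = {"endpoint": endpoint}
    data["ml_service"] = service
    return service
```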
This is related to mbari-org/vars-feedback#16
Save workflow:
- Grab image as PNG
  - Async save to disk and hold on to the path
- Convert to JPG
  - Async save to disk and hold on to the path
- Async send to ML server
- On-complete actions:
  - Save annotations
    - Convert localizations to annotations
    - Add to a bulk save command and publish on the event bus
    - Delete PNG and JPG from local cache
    - Clean up Stage and hide it
  - Save image and annotations
    - Save PNG and JPG similar to FrameCaptureCmd
    - Convert localizations to annotations
    - Add `image_reference_uuid` to bounding box
    - Add to a bulk save command and publish on the event bus
    - Delete PNG and JPG from local cache
    - Clean up Stage and hide it
  - Cancel
    - Delete PNG and JPG from local cache
    - Clean up Stage and hide it
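The async steps above could be orchestrated roughly as below. This is a sketch with placeholder callables standing in for the real VARS operations (the actual implementation lives in VARS's Java command classes).

```python
from concurrent.futures import ThreadPoolExecutor


def run_save_workflow(grab_png, convert_to_jpg, send_to_ml_server):
    """Sketch of the save workflow: save the PNG asynchronously and hold on
    to its path, convert/save the JPG, and send the image to the ML server.

    The three callables are placeholders for the real framegrab, image
    conversion, and prediction-request operations.
    """
    with ThreadPoolExecutor() as pool:
        # Async save PNG to disk; hold on to the path
        png_path = pool.submit(grab_png).result()
        # Async convert to JPG and save; hold on to the path
        jpg_future = pool.submit(convert_to_jpg, png_path)
        # Async send to the ML server
        ml_future = pool.submit(send_to_ml_server, png_path)
        # On complete: caller handles save/cancel actions and cache cleanup
        return png_path, jpg_future.result(), ml_future.result()
```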
Note: Bounding boxes without an `image_reference_uuid` are assumed to belong to the video that their owner observations belong to.
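A tiny helper makes that rule concrete (names are illustrative, not VARS API): boxes saved alongside an image get the `image_reference_uuid` field, while boxes without it fall back to the owner observation's video.

```python
def attach_image_reference(bounding_box, image_reference_uuid=None):
    """Return a copy of the bounding-box data with image_reference_uuid set
    when available. A box without the field is assumed to belong to the
    video of its owner observation.
    """
    box = dict(bounding_box)
    if image_reference_uuid is not None:
        box["image_reference_uuid"] = image_reference_uuid
    return box
```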
Pushed as release 1.4.0-rc1 to the video lab.
The new internal endpoint at MBARI is http://digits-dev-box-fish.shore.mbari.org:8082/. The CVision AI model server is deployed on GPU 3 (consumes ~1.7 GB of VRAM).
In production use since the 1.4.1 release.