cnr-isti-vclab/TagLab

✨ Feature Request: Integrating Meta's Segment Anything Model (SAM)

Closed this issue · 4 comments

Hello, I was curious about your thoughts on integrating Meta's new Segment Anything Model (SAM), either in addition to or as a replacement for the Deep Extreme Cut algorithm, which does a good job segmenting coral in an orthomosaic (considering it was never trained on the data). Like the current tools in TagLab that assist in creating polygons / segmentation masks for coral, SAM lets users input prompts (points, bounding boxes) to create annotations for objects (things) in the image. Below is a screenshot of using the bounding-box tool to segment out some coral.

[Screenshot: SAM bounding-box prompt segmenting coral in an orthomosaic]
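For context, a rough sketch of the prompt-based workflow with the segment-anything package is below. The checkpoint, image path, and box coordinates are placeholders, and this is only an illustration, not TagLab code:

```python
import numpy as np
import cv2
from segment_anything import sam_model_registry, SamPredictor

# Load a SAM checkpoint (path is a placeholder) and wrap it in a predictor.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

# SAM expects an RGB HxWx3 uint8 image.
image = cv2.cvtColor(cv2.imread("ortho_tile.png"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# Bounding-box prompt in pixel coordinates (x0, y0, x1, y1) around a coral colony.
box = np.array([120, 80, 640, 520])
masks, scores, _ = predictor.predict(box=box, multimask_output=False)

# masks[0] is a boolean HxW mask that could then be converted to a polygon annotation.
```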

Hi Jordan,

We are testing it and thinking about the best way to integrate it (or part of it) into TagLab.

All the best

@maxcorsini I'm looking at adding this as a feature in my branch; I just wanted to check that y'all aren't actively working on integrating this feature so I don't waste time by duplicating work.

Cheers.

Never mind, I added it to my fork; I'll submit a PR after I finish testing :)

Dear Jordan,

We are sorry for not being responsive on this point. We have a new version of TagLab with new features, SAM included, but we do not consider this version ready yet because there are sometimes performance issues (e.g., inversion of the segmentation, among other problems). Additionally, a full integration at the interface level is quite complicated: we would like to change the interface so that it is more uniform with the many interaction modes SAM offers (positive/negative clicks, fully automatic segmentation, bounding box with corrections, and so on).

We are really interested in discussing this with you. I will send you a separate email about this.

Best regards,
Massimiliano