UX-Decoder/Semantic-SAM

Result is not convincing

sipie800 opened this issue · 1 comment

[Screenshot: 2024-02-06_180916 — predicted masks from the demo]
This is the predicted mask from the SwinT demo. I used it to test the model and it simply fails on what should be a very simple task. I don't think it produces a convincing result. Can we make the SAM-like model a truly robust one?

I think the first two outputs are reasonable, and our SwinL model gives more robust results. What is the result of the original SAM model on this image?
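To answer that comparison question, here is a minimal sketch (not from this thread) of how one could prompt the original SAM with the same image and click point and inspect its mask proposals. The checkpoint path, image filename, and click coordinates are placeholders, not values taken from the issue.

```python
import cv2
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

# Load an original SAM checkpoint (ViT-H shown; the path is an assumption).
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

# Read the same test image used in the Semantic-SAM demo (placeholder filename).
image = cv2.cvtColor(cv2.imread("test_image.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# Prompt with a single foreground click at the location used in the demo (placeholder x, y).
point_coords = np.array([[500, 375]])
point_labels = np.array([1])  # 1 = foreground click

# SAM returns multiple candidate masks with confidence scores when multimask_output=True.
masks, scores, _ = predictor.predict(
    point_coords=point_coords,
    point_labels=point_labels,
    multimask_output=True,
)
print("mask shapes:", masks.shape, "scores:", scores)
```

Comparing these masks side by side with the Semantic-SAM SwinT and SwinL outputs for the same click would make it clear which model is more robust on this example.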