Preprocessing code to generate the training set from SA-1B
Opened this issue · 0 comments
sovit-123 commented
Hello. Great work with FastSAM.
Is it possible that you can open source the preprocessing code that you used for generating the dataset from SA-1B?
I am trying to figure out the dataset structure from the paper, but I have a few gaps to fill. I speculate the steps were:
- SA-1B has annotations for every object in each image, and the masks are stored in COCO RLE format.
- The bounding box for each mask was generated from the RLE during preprocessing.
- A YOLOv8 segmentation model was then trained on these boxes and masks, so it can predict each object's bounding box and segmentation mask.
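To make step 2 concrete, here is a minimal sketch of decoding a COCO RLE mask and deriving its bounding box. This is only my guess at the preprocessing, not FastSAM's actual code; SA-1B ships compressed RLE, which `pycocotools.mask.decode` handles, so this pure-Python version assumes the uncompressed `counts` form (column-major, starting with a run of zeros) for illustration.

```python
def rle_to_mask(rle):
    """Decode uncompressed COCO RLE into a row-major 2D list of 0/1.
    COCO RLE runs are column-major and the first run counts zeros."""
    h, w = rle["size"]
    flat, value = [], 0
    for count in rle["counts"]:
        flat.extend([value] * count)
        value = 1 - value  # runs alternate between background and foreground
    mask = [[0] * w for _ in range(h)]
    for i, v in enumerate(flat):
        mask[i % h][i // h] = v  # column-major flat index -> (row, col)
    return mask

def mask_to_bbox(mask):
    """Return (x_min, y_min, x_max, y_max) of the foreground pixels."""
    xs = [x for row in mask for x, v in enumerate(row) if v]
    ys = [y for y, row in enumerate(mask) if any(row)]
    return min(xs), min(ys), max(xs), max(ys)

# Toy 3x3 example: one foreground pixel at (row=1, col=1).
# Its column-major flat index is 1*3 + 1 = 4, so counts = [4, 1, 4].
rle = {"size": [3, 3], "counts": [4, 1, 4]}
print(mask_to_bbox(rle_to_mask(rle)))  # (1, 1, 1, 1)
```

The resulting boxes would then be converted to the normalized YOLO label format before training.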
Is this the correct approach or am I missing something?