CASIA-IVA-Lab/FastSAM

Preprocessing code to generate the training set from SA-1B


Hello. Great work with FastSAM.
Could you open-source the preprocessing code you used to generate the training set from SA-1B?

I am trying to reconstruct the dataset pipeline from the paper, but I have a few gaps to fill. My guess is the following:

  1. SA-1B provides a mask annotation for every object in each image, stored in COCO RLE format.
  2. The bounding box for each mask is derived from its RLE encoding during preprocessing (see the sketch after this list).
  3. A YOLOv8 segmentation model is then trained on the result, so that it can predict each object's bounding box and segmentation mask.
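
For concreteness, here is a minimal sketch of what I imagine step 2 looks like. It assumes the per-image JSON layout of SA-1B (an `"image"` entry with width/height and an `"annotations"` list whose `"segmentation"` field is a COCO RLE dict) and uses pycocotools plus OpenCV; the mask-to-polygon step is my own choice, not something the paper confirms:

```python
import json

import cv2  # opencv-python
import numpy as np
from pycocotools import mask as mask_utils  # pycocotools


def sa1b_to_yolo_seg_labels(annotation_path: str) -> list[str]:
    """Convert one SA-1B per-image JSON file into YOLOv8-seg label lines."""
    with open(annotation_path) as f:
        data = json.load(f)

    img_w = data["image"]["width"]
    img_h = data["image"]["height"]

    lines = []
    for ann in data["annotations"]:
        # SA-1B stores each mask as a COCO RLE dict: {"size": [h, w], "counts": ...}
        rle = ann["segmentation"]

        # Step 2: the bounding box falls out of the RLE directly;
        # toBbox returns [x, y, w, h] in pixel coordinates. (Ultralytics
        # derives boxes from polygons at train time, so this is mainly
        # useful for sanity checks or detection-only labels.)
        x, y, bw, bh = mask_utils.toBbox(rle)

        # YOLOv8-seg labels are normalized polygons, so decode the mask and
        # trace its outer contour (my polygonization choice, not necessarily
        # what the authors did).
        binary = mask_utils.decode(rle)  # (h, w) uint8 array
        contours, _ = cv2.findContours(
            binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE
        )
        if not contours:
            continue
        poly = max(contours, key=cv2.contourArea).reshape(-1, 2).astype(np.float64)
        poly[:, 0] /= img_w  # normalize x to [0, 1]
        poly[:, 1] /= img_h  # normalize y to [0, 1]

        # Single class (id 0), since FastSAM segments everything class-agnostically.
        coords = " ".join(f"{v:.6f}" for v in poly.flatten())
        lines.append(f"0 {coords}")

    return lines
```

If this roughly matches what you did, or if you handled the RLE-to-polygon conversion differently (e.g. with polygon simplification via `cv2.approxPolyDP`), it would be great to know.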

Is this the correct approach, or am I missing something?