hustvl/SparseInst

training details

lsm140 opened this issue · 2 comments

Paper says SparseInst is built on Detectron2 [42] and trained over 8 GPUs with a total of 64 images per mini-batch.
Q1: Does it use FP16 or not?

Q2: Do the small and large backbones use the same batch size per GPU?

Q3: Inference is tested on the 2080Ti, but the training hardware is unclear. Was it an RTX 3090?

Hi, thanks for your interest in SparseInst. We'd like to clarify your questions as follows:

Q1: SparseInst supports both FP32 and FP16. The results and performance reported in our paper are based on FP32.

Q2: All models, including those with larger backbones, are trained with the same batch size and schedule.
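For reference, here is a minimal sketch of how these settings map onto a Detectron2-style config. `SOLVER.IMS_PER_BATCH` and `SOLVER.AMP.ENABLED` are standard Detectron2 config keys; the exact config files shipped in the SparseInst repo may organize these values differently, so treat this as an illustration, not the repo's actual config:

```yaml
# Hypothetical config fragment illustrating the settings discussed above.
SOLVER:
  IMS_PER_BATCH: 64   # 64 images total per mini-batch across 8 GPUs -> 8 per GPU
  AMP:
    ENABLED: False    # paper results use FP32; set True to train with FP16/AMP
```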

Q3: We used the 2080Ti for all the experiments in our paper. The 3090 was only used for experiments reported later.

Dear Author, when training SparseInst on my own clothing dataset recently, the mask edges often show wavy lines (the training-set annotations do not have this). What may cause this? To track down the cause, I also ran inference on my own data with the COCO pre-trained model and found the wavy lines there as well. Is this phenomenon inherent to the algorithm?