TypeError when training on my own dataset in COCOA-cls format
nosyl opened this issue · 0 comments
I'm trying to train VRSP-Net on my own dataset, which is in COCOA-cls format, like this:
python tools/train_net.py --config-file configs/MyOwn-AmodalSegmentation/mask_rcnn_R_50_FPN_1x_parallel_CtRef_VAR_SPRef_SPRet_FM.yaml
However, while training (during evaluation), the process crashed with the following traceback:
creating index...
index created!
Running per image evaluation...
Evaluate annotation type *bbox*
DONE (t=30.99s).
Accumulating evaluation results...
DONE (t=0.26s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000
[08/22 07:55:35 d2.evaluation.amodal_visible_evaluation]: Evaluation results for bbox:
| AP | AP50 | AP75 | APs | APm | APl |
|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|
| -100.000 | -100.000 | -100.000 | -100.000 | -100.000 | -100.000 |
[08/22 07:55:35 d2.evaluation.amodal_visible_evaluation]: Evaluation task_name : visible2_segm
Loading and preparing results...
DONE (t=0.56s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type *segm*
[08/22 07:55:37 d2.engine.hooks]: Overall training speed: 9997 iterations in 4:27:01 (1.6026 s / it)
[08/22 07:55:37 d2.engine.hooks]: Total training time: 6:47:26 (2:20:24 on hooks)
Traceback (most recent call last):
File "tools/train_net.py", line 173, in <module>
args=(args,),
File "/workspace/Amodal-Segmentation-Based-on-Visible-Region-Segmentation-and-Shape-Prior/detectron2/engine/launch.py", line 51, in launch
main_func(*args)
File "tools/train_net.py", line 161, in main
return trainer.train()
File "/workspace/Amodal-Segmentation-Based-on-Visible-Region-Segmentation-and-Shape-Prior/detectron2/engine/defaults.py", line 416, in train
super().train(self.start_iter, self.max_iter)
File "/workspace/Amodal-Segmentation-Based-on-Visible-Region-Segmentation-and-Shape-Prior/detectron2/engine/train_loop.py", line 133, in train
self.after_step()
File "/workspace/Amodal-Segmentation-Based-on-Visible-Region-Segmentation-and-Shape-Prior/detectron2/engine/train_loop.py", line 151, in after_step
h.after_step()
File "/workspace/Amodal-Segmentation-Based-on-Visible-Region-Segmentation-and-Shape-Prior/detectron2/engine/hooks.py", line 325, in after_step
results = self._func()
File "/workspace/Amodal-Segmentation-Based-on-Visible-Region-Segmentation-and-Shape-Prior/detectron2/engine/defaults.py", line 366, in test_and_save_results
self._last_eval_results = self.test(self.cfg, self.model)
File "/workspace/Amodal-Segmentation-Based-on-Visible-Region-Segmentation-and-Shape-Prior/detectron2/engine/defaults.py", line 600, in test
results_i = inference_on_dataset(model, data_loader, evaluator)
File "/workspace/Amodal-Segmentation-Based-on-Visible-Region-Segmentation-and-Shape-Prior/detectron2/evaluation/evaluator.py", line 192, in inference_on_dataset
results = evaluator.evaluate()
File "/workspace/Amodal-Segmentation-Based-on-Visible-Region-Segmentation-and-Shape-Prior/detectron2/evaluation/amodal_visible_evaluation.py", line 251, in evaluate
self._eval_predictions(set(self._tasks))
File "/workspace/Amodal-Segmentation-Based-on-Visible-Region-Segmentation-and-Shape-Prior/detectron2/evaluation/amodal_visible_evaluation.py", line 421, in _eval_predictions
if len(self._amodal_results) > 0
File "/workspace/Amodal-Segmentation-Based-on-Visible-Region-Segmentation-and-Shape-Prior/detectron2/evaluation/amodal_visible_evaluation.py", line 825, in _evaluate_predictions_on_coco
coco_eval.evaluate()
File "/workspace/Amodal-Segmentation-Based-on-Visible-Region-Segmentation-and-Shape-Prior/detectron2/data/amodal_datasets/pycocotools/cocoeval.py", line 140, in evaluate
self._prepare()
File "/workspace/Amodal-Segmentation-Based-on-Visible-Region-Segmentation-and-Shape-Prior/detectron2/data/amodal_datasets/pycocotools/cocoeval.py", line 104, in _prepare
_toMask(gts, self.cocoGt)
File "/workspace/Amodal-Segmentation-Based-on-Visible-Region-Segmentation-and-Shape-Prior/detectron2/data/amodal_datasets/pycocotools/cocoeval.py", line 93, in _toMask
rle = coco.annToRLE(ann)
File "/workspace/Amodal-Segmentation-Based-on-Visible-Region-Segmentation-and-Shape-Prior/detectron2/data/amodal_datasets/pycocotools/coco.py", line 427, in annToRLE
rles = maskUtils.frPyObjects(segm, h, w) # commented out by nshimada 2022/08/22
File "pycocotools/_mask.pyx", line 293, in pycocotools._mask.frPyObjects
TypeError: Argument 'bb' has incorrect type (expected numpy.ndarray, got list)
I investigated and found the causes:
- My own dataset (in COCOA-cls format) has "visible_mask" annotations as two-dimensional lists, and some of them consist of four parts, like:
{"visible_mask": [[x1, y1, x2, y2, ...], [x1, y1, x2, y2, ...], [x1, y1, x2, y2, ...], [x1, y1, x2, y2, ...]]}
- According to your implementation: https://github.com/YutingXiao/Amodal-Segmentation-Based-on-Visible-Region-Segmentation-and-Shape-Prior/blob/main/detectron2/evaluation/amodal_visible_evaluation.py#L376-L378 , if a "visible_mask" annotation has more than two parts, it is converted into a three-dimensional list like: `{"segmentation": [[[x1, y1, x2, y2, ...], [x1, y1, x2, y2, ...], [x1, y1, x2, y2, ...], [x1, y1, x2, y2, ...]]]}`
if coco_api_eval_visible.anns[key][visible_name].__len__() > 2:
coco_api_eval_visible.anns[key]["segmentation"] = [coco_api_eval_visible.anns[key][visible_name]]
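To make the shape problem concrete, here is a small standalone reproduction of what that wrapping does (the variable names are simplified and hypothetical, not the repo's):

```python
# Hypothetical reproduction of the wrapping in amodal_visible_evaluation.py:
# a visible_mask with four polygon rings passes the `__len__() > 2` check
# and gets wrapped in an extra list, producing a three-dimensional structure
# whose first element has length 4 -- the shape that later trips pycocotools.

visible_mask = [
    [10, 10, 20, 10, 20, 20],   # ring 1 (flat x, y coordinate list)
    [30, 30, 40, 30, 40, 40],   # ring 2
    [50, 50, 60, 50, 60, 60],   # ring 3
    [70, 70, 80, 70, 80, 80],   # ring 4
]

# The check counts rings, not coordinates, so any mask with
# three or more rings gets an extra level of nesting.
if len(visible_mask) > 2:
    segmentation = [visible_mask]   # now 3-D: [[[...], [...], [...], [...]]]

assert len(segmentation) == 1
assert len(segmentation[0]) == 4    # four inner lists, not four coordinates
```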
- According to this issue: cocodataset/cocoapi#139, a TypeError happens if `len(annotation["segmentation"][0]) == 4`.
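For context, here is a simplified, pure-Python sketch of the dispatch logic inside pycocotools' `frPyObjects` (the real code is Cython, so this is an illustration, not the actual implementation). A first element of length 4 is ambiguous, so pycocotools routes it down the bounding-box branch, whose `bb` argument must be a numpy.ndarray, and a plain list raises exactly the TypeError above:

```python
# Simplified sketch of the polygon/bbox ambiguity in
# pycocotools._mask.frPyObjects. When len(pyobj[0]) == 4,
# the input could be a 2-point polygon OR an [x, y, w, h] bbox;
# pycocotools assumes bbox, and the bbox path expects an ndarray.

def fr_py_objects_branch(pyobj):
    """Return which branch frPyObjects would take for a list input."""
    if isinstance(pyobj, list) and pyobj and isinstance(pyobj[0], list):
        if len(pyobj[0]) == 4:
            # Misrouted: a plain list here triggers
            # "TypeError: Argument 'bb' has incorrect type".
            return "bbox"
        if len(pyobj[0]) > 4:
            return "polygon"
    return "other"

assert fr_py_objects_branch([[10, 10, 20, 20]]) == "bbox"             # ambiguous, misrouted
assert fr_py_objects_branch([[10, 10, 20, 10, 15, 25]]) == "polygon"  # handled correctly
```

Note that in my case the length-4 first element is not four coordinates but four polygon rings: after the wrapping above, `segmentation[0]` is the list of four rings, so `len(segmentation[0]) == 4` and the same branch is taken.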
I think one possible solution is to comment out https://github.com/YutingXiao/Amodal-Segmentation-Based-on-Visible-Region-Segmentation-and-Shape-Prior/blob/main/detectron2/evaluation/amodal_visible_evaluation.py#L377-L378 because we don't need to convert the annotations into three-dimensional lists.
But is this idea OK? Could you tell me whether it is, and why "visible_mask" annotations with more than two parts are converted into three-dimensional lists?
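Alternatively, instead of commenting the lines out, one could normalize the annotations back to the standard COCO polygon format (a list of flat coordinate lists) before evaluation. A minimal sketch, assuming a hypothetical helper name (`normalize_segmentation` is not part of the repo):

```python
# Hypothetical helper: unwrap an accidentally triple-nested segmentation
# [[ring, ring, ...]] back to the standard COCO shape [ring, ring, ...],
# and leave already-correct annotations untouched.

def normalize_segmentation(segm):
    """Flatten one extra nesting level if the first element is a list of lists."""
    if (isinstance(segm, list) and len(segm) == 1
            and isinstance(segm[0], list) and segm[0]
            and isinstance(segm[0][0], list)):
        return segm[0]
    return segm

nested = [[[10, 10, 20, 10, 20, 20], [30, 30, 40, 30, 40, 40]]]
flat = normalize_segmentation(nested)
assert flat == [[10, 10, 20, 10, 20, 20], [30, 30, 40, 30, 40, 40]]
```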