dvlab-research/PanopticFCN

Training panoptic segmentation on a custom dataset in COCO panoptic format


  1. Could you please tell whether the register_coco_panoptic function in https://github.com/facebookresearch/detectron2/blob/master/detectron2/data/datasets/coco_panoptic.py can be used to register a custom dataset in COCO panoptic format? I can generate the following files for training and validation of the custom dataset (a sketch of such a registration call follows this list).
    image_root (str): directory which contains all the images
    panoptic_root (str): directory which contains panoptic annotation images
    panoptic_json (str): path to the json panoptic annotation file
    instances_json (str): path to the json instance annotation file.

  2. Do we need any other files to train on a custom dataset?
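
For illustration, a minimal sketch of what such a registration call could look like, assuming a hypothetical dataset name my_dataset_train, placeholder paths, and made-up category ids (register_coco_panoptic and its arguments come from the coco_panoptic.py module linked above; everything else here is an assumption):

```python
from detectron2.data.datasets.coco_panoptic import register_coco_panoptic

# Hypothetical metadata for the custom dataset; the *_dataset_id_to_contiguous_id
# dicts map category ids in the json files to contiguous training ids.
metadata = {
    "thing_classes": ["car", "person"],
    "stuff_classes": ["road", "sky"],
    "thing_dataset_id_to_contiguous_id": {1: 0, 2: 1},
    "stuff_dataset_id_to_contiguous_id": {3: 0, 4: 1},
}

register_coco_panoptic(
    name="my_dataset_train",                                    # hypothetical dataset name
    metadata=metadata,
    image_root="datasets/my_dataset/images",                    # directory which contains all the images
    panoptic_root="datasets/my_dataset/panoptic",               # directory which contains panoptic annotation images
    panoptic_json="datasets/my_dataset/panoptic_train.json",    # json panoptic annotation file
    instances_json="datasets/my_dataset/instances_train.json",  # json instance annotation file
)
```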

Of course, you can register a custom dataset in COCO panoptic format. If you do not convert the panoptic annotations on the fly in each iteration, you may need an extra panoptic_stuff_root (the so-called sem_seg_root): a directory that contains all the converted stuff annotations. A conversion example can be found in detectron2, which converts the COCO panoptic format into the designated stuff labels.
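
For concreteness, here is a rough sketch of that kind of conversion, assuming the panopticapi package and a hypothetical mapping from stuff category ids to contiguous training ids; it collapses every thing segment into one merged label, in the spirit of the PanopticFPN-style preparation (void/ignore handling is omitted for brevity, and the exact label convention is an assumption, not PanopticFCN's recipe):

```python
import json
import os

import numpy as np
from PIL import Image
from panopticapi.utils import rgb2id  # pip install panopticapi


def panoptic_to_stuff_sem_seg(panoptic_json, panoptic_root, sem_seg_root, stuff_id_map, things_id=0):
    """Convert COCO panoptic PNGs into single-channel stuff segmentation PNGs.

    stuff_id_map maps panoptic category_id -> contiguous stuff training id;
    every thing segment is collapsed into `things_id`. Names and label
    conventions here are illustrative only.
    """
    os.makedirs(sem_seg_root, exist_ok=True)
    with open(panoptic_json) as f:
        annotations = json.load(f)["annotations"]

    for ann in annotations:
        pan_png = np.asarray(
            Image.open(os.path.join(panoptic_root, ann["file_name"])), dtype=np.uint32
        )
        segment_ids = rgb2id(pan_png)  # decode RGB triplets into per-pixel segment ids

        # Start with everything (things + unlabeled pixels) as the merged class.
        sem_seg = np.full(segment_ids.shape, things_id, dtype=np.uint8)
        for seg in ann["segments_info"]:
            if seg["category_id"] in stuff_id_map:  # stuff keeps its own label
                sem_seg[segment_ids == seg["id"]] = stuff_id_map[seg["category_id"]]

        Image.fromarray(sem_seg).save(os.path.join(sem_seg_root, ann["file_name"]))
```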

Since your project uses the PanopticFPN data format, should I be using the register_coco_panoptic_separated function in https://github.com/facebookresearch/detectron2/blob/master/detectron2/data/datasets/coco_panoptic.py, since it also registers the sem_seg_root (the directory that contains all the ground-truth segmentation annotations) you mentioned? I am asking because the detectron2 project has not yet provided details about registering custom datasets for panoptic segmentation.

Yes, we use a training strategy similar to that of PanopticFPN, which converts all the thing categories into a single stuff class for semantic segmentation. So you will probably need register_coco_panoptic_separated in your own project.
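
As a reference point, a register_coco_panoptic_separated call could then look roughly like this, reusing the same hypothetical metadata and paths as in the sketch above plus the converted sem_seg_root:

```python
from detectron2.data.datasets.coco_panoptic import register_coco_panoptic_separated

# Same hypothetical metadata as in the earlier sketch.
metadata = {
    "thing_classes": ["car", "person"],
    "stuff_classes": ["road", "sky"],
    "thing_dataset_id_to_contiguous_id": {1: 0, 2: 1},
    "stuff_dataset_id_to_contiguous_id": {3: 0, 4: 1},
}

register_coco_panoptic_separated(
    name="my_dataset_train",                                    # registered as "my_dataset_train_separated"
    metadata=metadata,
    image_root="datasets/my_dataset/images",
    panoptic_root="datasets/my_dataset/panoptic",
    panoptic_json="datasets/my_dataset/panoptic_train.json",
    sem_seg_root="datasets/my_dataset/sem_seg",                 # converted stuff annotations (see sketch above)
    instances_json="datasets/my_dataset/instances_train.json",
)
```

Note that detectron2 registers the separated dataset under the name with a _separated suffix, so the config's DATASETS.TRAIN would point at that suffixed name.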

Okay, thanks