openseg.pytorch

The official PyTorch implementation of OCNet, OCRNet, and SegFix.


News

  • 2022/08/07 HDETR is a general and effective scheme to improve DETRs for various fundamental vision tasks: H-Deformable-DETR (strong results on COCO object detection), H-PETR-3D (strong results on nuScenes), and H-PETR-Pose (strong results on COCO pose estimation).

  • 2022/03/09 RankSeg is a more effective formulation of the general segmentation problem and improves various SOTA segmentation methods across multiple benchmarks.

  • 2021/09/14 MMSegmentation has supported our ISANet; please refer to ISANet for more details.

  • 2021/08/13 We have released the implementation for HRFormer and the combination of HRFormer and OCR achieves better semantic segmentation performance.

  • 2021/03/12 The long-awaited acceptance is finally here: our "OCNet: Object context network for scene parsing" has been accepted by IJCV-2021. It consolidates two of our previous technical reports, OCNet and ISA. Congratulations to all the co-authors!

  • 2021/02/16 Support for PyTorch 1.7, mixed-precision, and distributed training. Based on the PaddleClas ImageNet pretrained weights, we achieve 83.22% on Cityscapes val, 59.62% on PASCAL-Context val (new SOTA), 45.20% on COCO-Stuff val (new SOTA), 58.21% on LIP val and 47.98% on ADE20K val. Please check out the pytorch-1.7 branch for more details.

  • 2020/12/07 PaddleSeg has supported our ISA and HRNet + OCR. Jittor also has supported our ResNet-101 + OCR.

  • 2020/08/16 MMSegmentation has supported our HRNet + OCR.

  • 2020/07/20 The researchers from AInnovation have achieved Rank#1 on ADE20K Leaderboard via training our HRNet + OCR with a semi-supervised learning scheme. More details are in their Technical Report.

  • 2020/07/09 OCR (Spotlight) and SegFix have been accepted by ECCV-2020. Notably, the researchers from Nvidia set a new state-of-the-art performance of 85.4% on the Cityscapes leaderboard by combining our HRNet + OCR with a new hierarchical multi-scale attention scheme.

  • 2020/05/11 We have released the checkpoints/logs of "HRNet + OCR" on all the 5 benchmarks including Cityscapes, ADE20K, LIP, PASCAL-Context and COCO-Stuff in the Model Zoo. Please feel free to try our method on your own dataset.

  • 2020/04/18 We have released some of our checkpoints/logs of OCNet, ISA, OCR and SegFix. We highly recommend using our SegFix to improve your segmentation results, as it is super easy and fast to use.

  • 2020/03/12 Our SegFix could be used to improve the performance of various SOTA methods on both semantic segmentation and instance segmentation, e.g., "PolyTransform + SegFix" achieves Rank#2 on the Cityscapes leaderboard (instance segmentation track) with a performance of 41.2%.

  • 2020/01/13 The source code for reproduced HRNet+OCR has been made public.

  • 2020/01/09 "HRNet + OCR + SegFix" achieves Rank#1 on the Cityscapes leaderboard with an mIoU of 84.5%.

  • 2019/09/25 We have released the paper OCR, which is the method behind our Rank#2 entry on the Cityscapes leaderboard.

  • 2019/07/31 We have released the paper ISA, which is very easy to use and implement while being much more efficient than OCNet or DANet, which are based on conventional self-attention.

  • 2019/07/23 We (HRNet + OCR w/ ASP) achieve Rank#1 on the leaderboard of Cityscapes (with a single model) on 3 of 4 metrics.

  • 2019/05/27 We achieve SOTA on 6 different semantic segmentation benchmarks: Cityscapes, ADE20K, LIP, PASCAL-Context, PASCAL-VOC, and COCO-Stuff. We provide the source code for our approach on all six benchmarks.

Model Zoo and Baselines

We provide a set of baseline results and trained models available for download in the Model Zoo.

Introduction

This is the official code of OCR, OCNet, ISA and SegFix. OCR, OCNet, and ISA focus on better context aggregation mechanisms (in the semantic segmentation task), while SegFix focuses on addressing boundary errors (in both semantic segmentation and instance segmentation tasks). We highlight the overall frameworks of OCR and SegFix in the figures shown below:

OCR

Fig.1 - Illustrating the OCR pipeline: (i) form the soft object regions (pink dashed box); (ii) estimate the object region representations (purple dashed box); (iii) compute the object-contextual representations and the augmented representations (orange dashed box).
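The three steps in Fig.1 map onto a handful of tensor operations. Below is a minimal sketch of the idea in PyTorch; it is our own simplification for illustration only (the function name is ours, and the official module additionally applies 1x1-conv transforms before the relation computation and a projection after the concatenation), assuming backbone pixel features and coarse soft-region logits are already available.

import torch
import torch.nn.functional as F

def ocr_context_sketch(pixel_feats, soft_regions):
    """Simplified sketch of the three OCR steps (not the official module).

    pixel_feats:  (B, C, H, W) pixel representations from the backbone.
    soft_regions: (B, K, H, W) coarse segmentation logits, one map per class.
    """
    B, C, H, W = pixel_feats.shape
    feats = pixel_feats.flatten(2)                       # (B, C, HW)
    # (i) soft object regions: spatial softmax of the coarse segmentation
    regions = soft_regions.flatten(2).softmax(dim=2)     # (B, K, HW)
    # (ii) object region representations: weighted average of pixel features
    region_feats = torch.bmm(regions, feats.transpose(1, 2))                   # (B, K, C)
    # (iii) pixel-region relation, then object-contextual representations
    relation = torch.bmm(feats.transpose(1, 2), region_feats.transpose(1, 2))  # (B, HW, K)
    relation = F.softmax(relation / C ** 0.5, dim=2)
    context = torch.bmm(relation, region_feats)          # (B, HW, C)
    context = context.transpose(1, 2).reshape(B, C, H, W)
    # augmented representation: concatenate pixel and contextual features
    return torch.cat([pixel_feats, context], dim=1)      # (B, 2C, H, W)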

SegFix

Fig.2 - Illustrating the SegFix framework: In the training stage, we first send the input image into a backbone to predict a feature map. Then we apply a boundary branch to predict a binary boundary map and a direction branch to predict a direction map, which is masked with the binary boundary map. We apply a boundary loss and a direction loss on the predicted boundary map and direction map separately. In the testing stage, we first convert the direction map to an offset map and then refine the segmentation results of any existing method according to the offset map.
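At test time, the refinement described above boils down to a simple rule: for each predicted boundary pixel, follow its predicted direction by one step toward the interior and copy that interior pixel's label. The sketch below illustrates only this core step under our own assumptions (the function name, tensor shapes, and 8-direction quantization are illustrative); the released SegFix scripts work with precomputed offset files and differ in detail.

import torch

# 8 discrete directions mapped to (dy, dx) offsets; the quantization actually used by SegFix may differ.
DIR_OFFSETS = torch.tensor(
    [(-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1)]
)

def segfix_refine(seg_label, boundary_mask, direction_idx):
    """Hypothetical test-time SegFix-style refinement (shapes/names are illustrative).

    seg_label:     (H, W) long tensor, segmentation produced by any existing method.
    boundary_mask: (H, W) bool tensor, predicted boundary pixels.
    direction_idx: (H, W) long tensor in [0, 8), predicted direction index per pixel.
    """
    H, W = seg_label.shape
    refined = seg_label.clone()
    ys, xs = torch.nonzero(boundary_mask, as_tuple=True)
    offsets = DIR_OFFSETS[direction_idx[ys, xs]]         # (N, 2) offsets toward the interior
    # move each boundary pixel one step along its direction and copy that label
    ny = (ys + offsets[:, 0]).clamp(0, H - 1)
    nx = (xs + offsets[:, 1]).clamp(0, W - 1)
    refined[ys, xs] = seg_label[ny, nx]
    return refined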

Citation

Please consider citing our work if you find it helpful:

@article{YuanW18,
  title={OCNet: Object context network for scene parsing},
  author={Yuhui Yuan and Jingdong Wang},
  journal={arXiv preprint arXiv:1809.00916},
  year={2018}
}

@article{HuangYGZCW19,
  title={Interlaced Sparse Self-Attention for Semantic Segmentation},
  author={Lang Huang and Yuhui Yuan and Jianyuan Guo and Chao Zhang and Xilin Chen and Jingdong Wang},
  journal={arXiv preprint arXiv:1907.12273},
  year={2019}
}

@article{YuanCW20,
  title={Object-Contextual Representations for Semantic Segmentation},
  author={Yuhui Yuan and Xilin Chen and Jingdong Wang},
  journal={arXiv preprint arXiv:1909.11065},
  year={2020}
}

@article{YuanXCW20,
  title={SegFix: Model-Agnostic Boundary Refinement for Segmentation},
  author={Yuhui Yuan and Jingyi Xie and Xilin Chen and Jingdong Wang},
  journal={arXiv preprint arXiv:2007.04269},
  year={2020}
}

@article{YuanFHZCW21,
  title={HRT: High-Resolution Transformer for Dense Prediction},
  author={Yuhui Yuan and Rao Fu and Lang Huang and Weihong Lin and Chao Zhang and Xilin Chen and Jingdong Wang},
  journal={arXiv preprint arXiv:2110.09408},
  year={2021}
}

Acknowledgment

This project is developed based on segbox.pytorch, and the author of segbox.pytorch, donnyyou, retains all the copyright of the reproduced DeepLabv3 and PSPNet related code.