UniInst: Towards End-to-End Instance Segmentation with Unique Representation
| Name | inf. time | mask AP | download |
| --- | --- | --- | --- |
| UniInst_MS_R_50_3x | 20 FPS | 38.4 | model |
| UniInst_MS_R_50_6x | 20 FPS | 38.9 | model |
| UniInst_MS_R_101_3x | 16 FPS | 39.7 | model |
| UniInst_MS_R_101_6x | 16 FPS | 40.2 | model |
For more models and information, please refer to the CondInst README.md.
Note that:
- Inference time for all projects is measured on an NVIDIA V100 with batch size 1.
- APs are evaluated on the COCO2017 test split unless otherwise specified.
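Since timing is reported at batch size 1, the per-image latency is simply the reciprocal of the FPS figure in the table. A quick sketch of the conversion (the helper name is ours, not part of the codebase):

```python
def latency_ms(fps):
    """Per-image latency in milliseconds implied by a batch-size-1 throughput in FPS."""
    return 1000.0 / fps

# e.g. 20 FPS corresponds to 50 ms per image, 16 FPS to 62.5 ms
```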
First install Detectron2 following the official guide: INSTALL.md.
Then build UniInst with:

```
python3 setup.py build develop
```
Some projects may require special setup; please follow their own README.md in configs.
- Pick a model and its config file, for example `UniInst_MS_R_50_3x.yaml`.
- Download the model.
- Run the demo with:

```
python demo/demo.py \
    --config-file configs/UniInst/UniInst_MS_R_50_3x.yaml \
    --input input1.jpg input2.jpg \
    --opts MODEL.WEIGHTS UniInst_R_50_3x.pth
```
To train a model with "train_net.py", first set up the corresponding datasets following datasets/README.md, then run:

```
OMP_NUM_THREADS=1 python3 tools/train_net.py \
    --config-file configs/UniInst/UniInst_MS_R_50_3x.yaml \
    --num-gpus 8 \
    OUTPUT_DIR training_dir/UniInst_R_50_3x
```
To evaluate the model after training, run:

```
OMP_NUM_THREADS=1 python3 tools/train_net.py \
    --config-file configs/UniInst/UniInst_MS_R_50_3x.yaml \
    --eval-only \
    --num-gpus 8 \
    OUTPUT_DIR training_dir/UniInst_R_50_3x \
    MODEL.WEIGHTS training_dir/UniInst_R_50_3x/model_final.pth
```
Note that:
- The configs are made for 8-GPU training. To train on a different number of GPUs, change `--num-gpus`.
- If you want to measure the inference time, please change `--num-gpus` to 1.
- We set `OMP_NUM_THREADS=1` by default, which achieves the best speed on our machines; please change it as needed.
- This quick start is made for FCOS. If you are using other projects, please check the projects' own README.md in configs.
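When the GPU count changes, the effective batch size changes with it, so detectron2-style configs usually also need `SOLVER.IMS_PER_BATCH` and `SOLVER.BASE_LR` rescaled by the linear scaling rule. A sketch of the arithmetic (the 8-GPU reference values below are illustrative defaults, not taken from the UniInst configs):

```python
def scale_solver(num_gpus, ref_gpus=8, ref_ims_per_batch=16, ref_base_lr=0.01):
    """Linear scaling rule: scale batch size and learning rate in
    proportion to the number of GPUs relative to the reference setup."""
    factor = num_gpus / ref_gpus
    return {
        "SOLVER.IMS_PER_BATCH": int(ref_ims_per_batch * factor),
        "SOLVER.BASE_LR": ref_base_lr * factor,
    }
```

The resulting key/value pairs can be passed as command-line overrides to train_net.py, the same way OUTPUT_DIR is passed above.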
Note that our work is based on AdelaiDet. If you use our code in your research or works, please also cite AdelaiDet.
Please use the following BibTeX entry:

```
@article{ou2022uniinst,
  title={UniInst: Unique representation for end-to-end instance segmentation},
  author={Ou, Yimin and Yang, Rui and Ma, Lufan and Liu, Yong and Yan, Jiangpeng and Xu, Shang and Wang, Chengjie and Li, Xiu},
  journal={Neurocomputing},
  volume={514},
  pages={551--562},
  year={2022},
  publisher={Elsevier}
}
```
MIT License
Copyright (c) 2021 Yimin Ou
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.