
The models, datasets (satellite & street view), and corresponding config files of the OmniCity-v1.0 project.


OmniCity-v1.0

Introduction

This project provides the models and corresponding configs for the multi-level tasks of OmniCity, built on MMDetection.
Note: If you want to learn more about our work, please refer here.

Overview of the models and their related tasks

Tasks: Instance segmentation, land use segmentation, and plane segmentation.

Satellite level: small view

| Data Type | Size | Method | Task | Download |
| --- | --- | --- | --- | --- |
| Satellite image | 512×512 | Mask R-CNN | Instance segmentation | Model & log |
| Satellite image | 512×512 | MS R-CNN | Instance segmentation | Model & log |
| Satellite image | 512×512 | CARAFE | Instance segmentation | Model & log |
| Satellite image | 512×512 | Cascade | Instance segmentation | Model & log |
| Satellite image | 512×512 | HTC | Instance segmentation | Model & log |

Satellite level: medium view

| Data Type | Size | Method | Task | Download |
| --- | --- | --- | --- | --- |
| Satellite image | 512×512 | Mask R-CNN | Instance segmentation | Model & log |

Satellite level: large view

| Data Type | Size | Method | Task | Download |
| --- | --- | --- | --- | --- |
| Satellite image | 512×512 | Mask R-CNN | Instance segmentation | Model & log |

Street level: Panorama

| Data Type | Size | Method | Task | Download |
| --- | --- | --- | --- | --- |
| Panorama image | 512×1024 | Mask R-CNN | Instance segmentation | Model & log |
| Panorama image | 512×1024 | Mask R-CNN | Land use segmentation | Model & log |
| Panorama image | 512×1024 | MS R-CNN | Land use segmentation | Model & log |
| Panorama image | 512×1024 | CARAFE | Land use segmentation | Model & log |
| Panorama image | 512×1024 | Cascade | Land use segmentation | Model & log |
| Panorama image | 512×1024 | HTC | Land use segmentation | Model & log |

Street level: Mono-view

| Data Type | Size | Method | Task | Download |
| --- | --- | --- | --- | --- |
| Mono-view image | 512×512 | Mask R-CNN | Instance segmentation | Model & log |
| Mono-view image | 512×512 | Mask R-CNN | Land use segmentation | Model & log |
| Mono-view image | 512×512 | Mask R-CNN | Plane segmentation | Model & log |

Usage

Data preparation

The OmniCity dataset can be downloaded from https://opendatalab.com/OmniCity.

If you want to test the models above on your own dataset, please prepare the data following MMDetection (datasets in COCO format are preferred). The data structure should look like this:

mmdetection
├── data
│   ├── coco
│   │   ├── annotations
│   │   ├── train2017
│   │   ├── val2017
│   │   ├── test2017
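
Before testing, it can help to verify that an annotation file really follows the COCO layout MMDetection expects. The following is a minimal sketch, not part of the official tooling; the sample file name and the "building" category are hypothetical examples:

```python
import json

def check_coco_annotations(path):
    """Sanity-check that a COCO-format annotation file has the
    top-level fields MMDetection's COCO-style datasets expect."""
    with open(path) as f:
        coco = json.load(f)
    for key in ("images", "annotations", "categories"):
        assert key in coco, f"missing top-level key: {key}"
    image_ids = {img["id"] for img in coco["images"]}
    for ann in coco["annotations"]:
        # Every annotation must reference an existing image.
        assert ann["image_id"] in image_ids
        # Instance segmentation tasks need a segmentation field.
        assert "segmentation" in ann
    return len(coco["images"]), len(coco["annotations"])

# Tiny illustrative annotation file (hypothetical content):
sample = {
    "images": [{"id": 1, "file_name": "000001.jpg",
                "width": 512, "height": 512}],
    "annotations": [{"id": 1, "image_id": 1, "category_id": 1,
                     "bbox": [10, 10, 50, 80], "area": 4000.0,
                     "iscrowd": 0,
                     "segmentation": [[10, 10, 60, 10, 60, 90, 10, 90]]}],
    "categories": [{"id": 1, "name": "building"}],
}
with open("sample_coco.json", "w") as f:
    json.dump(sample, f)

print(check_coco_annotations("sample_coco.json"))  # (1, 1)
```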

Model test

With OmniCity dataset:

# single-gpu testing
python tools/test.py \
    ${CONFIG_FILE} \
    ${CHECKPOINT_FILE} \
    [--out ${RESULT_FILE}] \
    [--eval ${EVAL_METRICS}] \
    [--show]

# CPU: disable GPUs and run single-gpu testing script
export CUDA_VISIBLE_DEVICES=-1
python tools/test.py \
    ${CONFIG_FILE} \
    ${CHECKPOINT_FILE} \
    [--out ${RESULT_FILE}] \
    [--eval ${EVAL_METRICS}] \
    [--show]

# multi-gpu testing
bash tools/dist_test.sh \
    ${CONFIG_FILE} \
    ${CHECKPOINT_FILE} \
    ${GPU_NUM} \
    [--out ${RESULT_FILE}] \
    [--eval ${EVAL_METRICS}]
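
To make the placeholders above concrete, here is a small sketch that assembles the single-gpu test invocation in Python. The config and checkpoint paths are hypothetical examples, not actual file names shipped with this repo; for OmniCity's instance segmentation in COCO format, the usual metrics are "bbox" and "segm":

```python
import shlex

def build_test_command(config_file, checkpoint_file,
                       out=None, eval_metrics=None, show=False):
    """Assemble the tools/test.py command from the template above."""
    cmd = ["python", "tools/test.py", config_file, checkpoint_file]
    if out:
        cmd += ["--out", out]             # pickle file for raw results
    if eval_metrics:
        cmd += ["--eval", *eval_metrics]  # e.g. "bbox", "segm" for COCO
    if show:
        cmd.append("--show")              # visualize predictions
    return cmd

# Hypothetical paths, for illustration only:
cmd = build_test_command(
    "configs/mask_rcnn/mask_rcnn_r50_fpn_1x_coco.py",
    "checkpoints/mask_rcnn_omnicity.pth",
    out="results.pkl",
    eval_metrics=["bbox", "segm"],
)
print(shlex.join(cmd))
```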

With a new dataset:

  • Prepare the dataset following the rules above
  • Run the test commands from the previous section
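
If your own annotations are not in COCO format yet, a minimal conversion can look like the sketch below. The input record format, field names, and the "building" category are assumptions for illustration; adapt them to your actual source data:

```python
import json

def to_coco(records, categories):
    """Convert simple per-image records into a COCO-format dict.
    `records` is a hypothetical input: a list of dicts with file_name,
    width, height, and a list of objects (category, bbox, polygon)."""
    name_to_id = {n: i + 1 for i, n in enumerate(categories)}
    cats = [{"id": i + 1, "name": n} for i, n in enumerate(categories)]
    images, annotations = [], []
    ann_id = 1
    for img_id, rec in enumerate(records, start=1):
        images.append({"id": img_id, "file_name": rec["file_name"],
                       "width": rec["width"], "height": rec["height"]})
        for obj in rec["objects"]:
            x, y, w, h = obj["bbox"]  # COCO uses [x, y, width, height]
            annotations.append({
                "id": ann_id, "image_id": img_id,
                "category_id": name_to_id[obj["category"]],
                "bbox": [x, y, w, h], "area": float(w * h),
                "segmentation": [obj["polygon"]], "iscrowd": 0,
            })
            ann_id += 1
    return {"images": images, "annotations": annotations,
            "categories": cats}

# Hypothetical source record for one panorama image:
records = [{"file_name": "pano_0001.jpg", "width": 1024, "height": 512,
            "objects": [{"category": "building",
                         "bbox": [100, 50, 200, 300],
                         "polygon": [100, 50, 300, 50, 300, 350, 100, 350]}]}]
coco = to_coco(records, ["building"])
print(json.dumps(coco)[:80])
```

The resulting dict can then be written to `data/coco/annotations/` following the directory layout shown earlier.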