CitySurfaces: City-scale Semantic Segmentation of Sidewalk Surfaces

CitySurfaces is a framework that combines active learning and semantic segmentation to locate, delineate, and classify sidewalk paving materials from street-level images. Our framework adopts a recent high-performing semantic segmentation model (Tao et al., 2020), which uses hierarchical multi-scale attention combined with object-contextual representations.

The framework was presented in our paper published in the journal Sustainable Cities and Society (arXiv link here).

CitySurfaces: City-scale semantic segmentation of sidewalk materials
Maryam Hosseini, Fabio Miranda, Jianzhe Lin, Claudio T. Silva, Sustainable Cities and Society, 2022

@article{HOSSEINI2022103630,
  title = {CitySurfaces: City-scale semantic segmentation of sidewalk materials},
  journal = {Sustainable Cities and Society},
  volume = {79},
  pages = {103630},
  year = {2022},
  issn = {2210-6707},
  doi = {10.1016/j.scs.2021.103630},
  url = {https://www.sciencedirect.com/science/article/pii/S2210670721008933},
  author = {Maryam Hosseini and Fabio Miranda and Jianzhe Lin and Claudio T. Silva},
  keywords = {Sustainable built environment, Surface materials, Urban heat island, Semantic segmentation, Sidewalk assessment, Urban analytics, Computer vision}
}

You can use our pre-trained model to run inference on your own street-level images. Our extended model can classify eight classes of paving materials:

CitySurfaces paving materials

The team includes Maryam Hosseini, Fabio Miranda, Jianzhe Lin, and Claudio T. Silva.

Updates

New weights from our updated model, trained on more cities (now including DC, Chicago, and Philadelphia), have been uploaded to our Google Drive.

Installing prerequisites

The framework is based on NVIDIA Semantic Segmentation. The code was tested with PyTorch 1.7 and Python 3.9. You can use ./Dockerfile to build an image.
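As a minimal sketch, assuming Docker and the NVIDIA Container Toolkit are installed (the image tag and mount path below are illustrative, not from this repo):

> docker build -t citysurfaces .
> docker run --gpus all -it -v /path/to/your/images:/data citysurfaces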

Run inference on your own data

Follow the instructions below to segment your own image data. Most of the steps are based on NVIDIA's original instructions, with modifications to the weights and dataset names.

Download Weights

  • Create a directory where you can keep large files.
  > mkdir <large_asset_dir>
  • Update __C.ASSETS_PATH in config.py to point at that directory:

    __C.ASSETS_PATH = '<large_asset_dir>'

  • Download our pretrained weights from Google Drive and place them under <large_asset_dir>/seg_weights (a command-line sketch follows this list).
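If you prefer to script the download, gdown (a third-party utility, not part of this repo) can fetch files from Google Drive; the file ID and weights filename below are placeholders, not the real ones:

> pip install gdown
> mkdir -p <large_asset_dir>/seg_weights
> gdown <GOOGLE_DRIVE_FILE_ID> -O <large_asset_dir>/seg_weights/<weights_file>.pth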

Running the code

The instructions below make use of a tool called runx, which helps automate experiment running and summarization; for more information, see runx. You can either use the runx-style command lines shown below or call python train.py <args ...> directly.
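If you want to preview what runx will execute, its README documents a -n (dry-run) flag that prints the generated commands without launching them; worth verifying against your installed version:

> python -m runx.runx scripts/inference-citysurfaces.yml -n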

Inference

Update inference-citysurfaces.yml, under the scripts directory, with the path to the image folder you would like to run inference on.
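The exact contents of inference-citysurfaces.yml are not reproduced here. As a rough sketch, runx configs in the parent NVIDIA repository pair a CMD entry with an HPARAMS block, so the relevant fields likely resemble the following (the field names and snapshot path are assumptions carried over from that repository, not verified against this one):

    CMD: "python train.py"

    HPARAMS: [
      {
       eval: folder,
       eval_folder: '/path/to/your/images',
       dump_assets: true,
       dump_all_images: true,
       snapshot: "ASSETS_PATH/seg_weights/<weights_file>.pth",
      },
    ]

Here eval: folder runs evaluation over a plain folder of images rather than a named dataset, and the two dump flags control whether the predicted masks and composited images are written out.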

Run

> python -m runx.runx scripts/inference-citysurfaces.yml -i

The results should look like the examples below, with your input image and its segmentation mask side by side.

CitySurfaces results