EdgeNets


Efficient networks for Computer Vision

This repo contains the source code of our work on designing efficient networks for different computer vision tasks: (1) image classification, (2) object detection, and (3) semantic segmentation.

Real-time semantic segmentation using ESPNetv2 on an iPhone 7. See here for the iOS application source code, which uses CoreML.
[Segmentation demo on iPhone 7]

Real-time object detection using ESPNetv2
[Detection demos 1, 2, and 3]

Table of contents

  1. Key highlights
  2. Supported networks
  3. Relevant papers
  4. Blogs
  5. Performance comparison
  6. Training recipe
  7. Citation
  8. License
  9. Acknowledgements

Key highlights

  • Object classification on the ImageNet and MS-COCO (multi-label)
  • Semantic segmentation on the PASCAL VOC and the Cityscapes
  • Object Detection on the PASCAL VOC and the MS-COCO
  • Supports PyTorch 1.0
  • Integrated with TensorBoard for easy visualization of training logs (see the sketch after this list).
  • Scripts for downloading different datasets.
  • Semantic segmentation application using ESPNetv2 on iPhone can be found here.
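
As a minimal sketch of the TensorBoard integration mentioned above, logging scalars with PyTorch's SummaryWriter looks like the snippet below. The log directory, tag names, and values here are hypothetical placeholders; the repo's training scripts define their own logging keys.

    # Minimal sketch: scalar logging with PyTorch's built-in TensorBoard writer.
    # Log directory and tag names are hypothetical, not the repo's actual keys.
    from torch.utils.tensorboard import SummaryWriter

    writer = SummaryWriter(log_dir="runs/espnetv2_example")  # hypothetical log dir
    for epoch in range(5):
        train_loss = 1.0 / (epoch + 1)      # placeholder value
        top1_acc = 50.0 + 5.0 * epoch       # placeholder value
        writer.add_scalar("train/loss", train_loss, epoch)
        writer.add_scalar("val/top1", top1_acc, epoch)
    writer.close()
    # View the logs with: tensorboard --logdir runs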

Supported networks

This repo supports the following networks:

  • ESPNetv2 (Classification, Segmentation, Detection)
  • DiCENet (Classification, Segmentation, Detection)
  • ShuffleNetv2 (Classification)

Relevant papers

Blogs

Performance comparison

ImageNet

The figure below compares the performance of DiCENet with other efficient networks on the ImageNet dataset. DiCENet outperforms all existing efficient networks, including MobileNetv2 and ShuffleNetv2. More details are available here.

[DiCENet performance on ImageNet]

Object detection

The table below compares the performance of our architecture with other detection networks on the MS-COCO dataset. Our network is both fast and accurate. More details are available here.

MS-COCO
Network               Image Size   FLOPs    mAP     FPS
SSD-VGG               512x512      100 B    26.8    19
YOLOv2                544x544      17.5 B   21.6    40
ESPNetv2-SSD (Ours)   512x512      3.2 B    24.54   35

Semantic Segmentation

The comparison below shows the performance of ESPNet and ESPNetv2 on two different datasets. Note that ESPNets are among the first efficient networks to deliver performance competitive with existing networks on the PASCAL VOC dataset, even with low-resolution images (e.g., 256x256). See here for more details.

            Cityscapes                          PASCAL VOC 2012
Network     Image Size   FLOPs    mIOU          Image Size   FLOPs    mIOU
ESPNet      1024x512     4.5 B    60.3          512x512      2.2 B    63
ESPNetv2    1024x512     2.7 B    66.2          384x384      0.76 B   68
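
FLOP counts like those in the tables above depend strongly on input resolution, which is why low-resolution inputs such as 256x256 keep these networks cheap. Below is a minimal sketch (not part of this repo) of how one might measure FLOPs at different resolutions, using fvcore and a torchvision MobileNetV2 as a stand-in model; the repo's own ESPNet/ESPNetv2 models would be plugged in the same way.

    # Minimal sketch: measure how FLOPs grow with input resolution.
    # Uses fvcore and a torchvision MobileNetV2 as a stand-in for the repo's models.
    import torch
    from torchvision.models import mobilenet_v2
    from fvcore.nn import FlopCountAnalysis

    model = mobilenet_v2().eval()
    for size in (256, 384, 512):
        x = torch.randn(1, 3, size, size)
        flops = FlopCountAnalysis(model, x)
        # fvcore counts one multiply-add as one FLOP
        print(f"{size}x{size}: {flops.total() / 1e9:.2f} B FLOPs")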

Training Recipe

Image Classification

Details about training and testing are provided here.

Details about performance of different models are provided here.

Semantic segmentation

Details about training and testing are provided here.

Details about performance of different models are provided here.

Object Detection

Details about training and testing are provided here.

Details about performance of different models are provided here.

Citation

If you find this repository helpful, please feel free to cite our work:

@misc{mehta2019dicenet,
  title={DiCENet: Dimension-wise Convolutions for Efficient Networks},
  author={Mehta, Sachin and Hajishirzi, Hannaneh and Rastegari, Mohammad},
  year={2019},
  eprint={arXiv:1906.03516}
}

@inproceedings{mehta2018espnetv2,
  title={ESPNetv2: A Light-weight, Power Efficient, and General Purpose Convolutional Neural Network},
  author={Mehta, Sachin and Rastegari, Mohammad and Shapiro, Linda and Hajishirzi, Hannaneh},
  booktitle={Proceedings of the IEEE conference on computer vision and pattern recognition},
  year={2019}
}

@inproceedings{mehta2018espnet,
  title={Espnet: Efficient spatial pyramid of dilated convolutions for semantic segmentation},
  author={Mehta, Sachin and Rastegari, Mohammad and Caspi, Anat and Shapiro, Linda and Hajishirzi, Hannaneh},
  booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
  pages={552--568},
  year={2018}
}

License

By downloading this software, you acknowledge that you agree to the terms and conditions given here.

Acknowledgements

Most of our object detection code is adapted from SSD in PyTorch. We thank the authors for their amazing work.