Tensorflow Object Detection API
Creating accurate machine learning models capable of localizing and identifying multiple objects in a single image remains a core challenge in computer vision. The TensorFlow Object Detection API is an open source framework built on top of TensorFlow that makes it easy to construct, train and deploy object detection models. At Google we’ve certainly found this codebase to be useful for our computer vision needs, and we hope that you will as well.
Contributions to the codebase are welcome and we would love to hear back from you if you find this API useful. Finally, if you use the Tensorflow Object Detection API for a research publication, please consider citing:
"Speed/accuracy trade-offs for modern convolutional object detectors."
Huang J, Rathod V, Sun C, Zhu M, Korattikara A, Fathi A, Fischer I, Wojna Z, Song Y, Guadarrama S, Murphy K, CVPR 2017
Maintainers
- Jonathan Huang, github: jch1
- Vivek Rathod, github: tombstone
- Ronny Votel, github: ronnyvotel
- Derek Chow, github: derekjchow
- Chen Sun, github: jesu9
- Menglong Zhu, github: dreamdragon
- Alireza Fathi, github: afathi3
- Zhichao Lu, github: pkulzc
Table of contents
Setup:
Quick Start:
Customizing a Pipeline:
Running:
Extras:
- Tensorflow detection model zoo
- Exporting a trained model for inference
- Defining your own model architecture
- Bringing in your own dataset
- Supported object detection evaluation protocols
- Inference and evaluation on the Open Images dataset
- Run an instance segmentation model
- Run the evaluation for the Open Images Challenge 2018
- TPU compatible detection pipelines
- Running object detection on mobile devices with TensorFlow Lite
Getting Help
To get help with issues you may encounter using the Tensorflow Object Detection API, create a new question on StackOverflow with the tags "tensorflow" and "object-detection".
Please report bugs (actually broken code, not usage questions) to the tensorflow/models GitHub issue tracker, prefixing the issue name with "object_detection".
Please check the FAQ for frequently asked questions before reporting an issue.
Release information
Sep 17, 2018
We have released Faster R-CNN detectors with ResNet-50 / ResNet-101 feature extractors trained on the iNaturalist Species Detection Dataset. The models are trained on the training split of the iNaturalist data for 4M iterations and achieve 55% and 58% mean AP@.5 over 2854 classes, respectively. For more details please refer to this paper.
Thanks to contributors: Chen Sun
July 13, 2018
There are many new updates in this release, extending the functionality and capability of the API:
- Moving from slim-based training to Estimator-based training.
- Support for RetinaNet, and a MobileNet adaptation of RetinaNet.
- A novel SSD-based architecture called the Pooling Pyramid Network (PPN).
- Releasing several TPU-compatible models. These can be found in the samples/configs/ directory with a comment in the pipeline configuration files indicating TPU compatibility.
- Support for quantized training (see the sketch after this list).
- Updated documentation for new binaries, Cloud training, and Tensorflow Lite.
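As an illustration of the quantized-training option, here is a minimal sketch of how it surfaces in a pipeline configuration. It assumes the API's `pipeline_pb2` proto and its `graph_rewriter` fields (from `object_detection/protos/graph_rewriter.proto`); the file names are placeholders, and the delay value is just an example.

```python
# Sketch: turning on quantization-aware training by editing a pipeline config.
from google.protobuf import text_format
from object_detection.protos import pipeline_pb2

config = pipeline_pb2.TrainEvalPipelineConfig()
with open('pipeline.config') as f:            # placeholder: any sample config
    text_format.Merge(f.read(), config)

# The graph_rewriter block controls quantized training.
config.graph_rewriter.quantization.delay = 48000   # steps of float training first
config.graph_rewriter.quantization.weight_bits = 8
config.graph_rewriter.quantization.activation_bits = 8

with open('pipeline_quantized.config', 'w') as f:
    f.write(text_format.MessageToString(config))
```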
See also our expanded announcement blogpost and accompanying tutorial at the TensorFlow blog.
Thanks to contributors: Sara Robinson, Aakanksha Chowdhery, Derek Chow, Pengchong Jin, Jonathan Huang, Vivek Rathod, Zhichao Lu, Ronny Votel
June 25, 2018
Additional evaluation tools for the Open Images Challenge 2018 are out. Check out our short tutorial on data preparation and running evaluation here!
Thanks to contributors: Alina Kuznetsova
June 5, 2018
We have released the implementation of evaluation metrics for both tracks of the Open Images Challenge 2018 as a part of the Object Detection API - see the evaluation protocols for more details. Additionally, we have released a tool for hierarchical labels expansion for the Open Images Challenge: check out oid_hierarchical_labels_expansion.py.
Thanks to contributors: Alina Kuznetsova, Vittorio Ferrari, Jasper Uijlings
April 30, 2018
We have released a Faster R-CNN detector with ResNet-101 feature extractor trained on AVA v2.1. Compared with other commonly used object detectors, it changes the action classification loss function to per-class Sigmoid loss to handle boxes with multiple labels. The model is trained on the training split of AVA v2.1 for 1.5M iterations and achieves a mean AP of 11.25% over 60 classes on the validation split of AVA v2.1. For more details please refer to this paper.
Thanks to contributors: Chen Sun, David Ross
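To make the loss choice concrete, here is an illustrative TensorFlow snippet (not code from the release; the logits and labels are invented) showing why per-class sigmoid cross-entropy handles boxes with multiple labels:

```python
# Illustrative sketch of per-class sigmoid loss for boxes with multiple labels.
import tensorflow as tf

logits = tf.constant([[2.0, -1.0, 0.5]])  # one box, 3 hypothetical action classes
labels = tf.constant([[1.0, 0.0, 1.0]])   # multi-hot: two actions apply to this box

# Sigmoid cross-entropy treats each class as an independent yes/no decision,
# so a single box can carry several positive labels; softmax cross-entropy
# would instead force the classes to compete for exactly one label.
per_class = tf.nn.sigmoid_cross_entropy_with_logits(labels=labels, logits=logits)
loss = tf.reduce_sum(per_class, axis=-1)  # sum over classes: one loss per box
```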
April 2, 2018
Supercharge your mobile phones with the next generation mobile object detector! We are adding support for MobileNet V2 with SSDLite presented in MobileNetV2: Inverted Residuals and Linear Bottlenecks. This model is 35% faster than Mobilenet V1 SSD on a Google Pixel phone CPU (200ms vs. 270ms) at the same accuracy. Along with the model definition, we are also releasing a model checkpoint trained on the COCO dataset.
Thanks to contributors: Menglong Zhu, Mark Sandler, Zhichao Lu, Vivek Rathod, Jonathan Huang
February 9, 2018
We now support instance segmentation! In this API update we support a number of instance segmentation models similar to those discussed in the Mask R-CNN paper. For further details refer to our slides from the 2017 COCO + Places Workshop. Refer to the section on Running an Instance Segmentation Model for instructions on how to configure a model that predicts masks in addition to object bounding boxes.
Thanks to contributors: Alireza Fathi, Zhichao Lu, Vivek Rathod, Ronny Votel, Jonathan Huang
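For a rough sense of what that configuration involves, here is a hedged sketch (the documented recipe is in the Running an Instance Segmentation Model section) of the two pipeline fields that add mask prediction to a Faster R-CNN config. The field names come from the API's object_detection/protos definitions; the config file names are placeholders.

```python
# Sketch: extending a Faster R-CNN pipeline config to predict instance masks.
from google.protobuf import text_format
from object_detection.protos import pipeline_pb2

config = pipeline_pb2.TrainEvalPipelineConfig()
with open('faster_rcnn.config') as f:        # placeholder config name
    text_format.Merge(f.read(), config)

frcnn = config.model.faster_rcnn
frcnn.number_of_stages = 3                   # add the third, mask-predicting stage
frcnn.second_stage_box_predictor.mask_rcnn_box_predictor.predict_instance_masks = True

with open('mask_rcnn.config', 'w') as f:
    f.write(text_format.MessageToString(config))
```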
November 17, 2017
As a part of the Open Images V3 release we have released:
- An implementation of the Open Images evaluation metric and the protocol.
- Additional tools to separate inference of detection and evaluation (see this tutorial).
- A new detection model trained on the Open Images V2 data release (see Open Images model).
See more information on the Open Images website!
Thanks to contributors: Stefan Popov, Alina Kuznetsova
November 6, 2017
We have re-released faster versions of our (pre-trained) models in the model zoo. In addition to what was available before, we are also adding Faster R-CNN models trained on COCO with Inception V2 and Resnet-50 feature extractors, as well as a Faster R-CNN with Resnet-101 model trained on the KITTI dataset.
Thanks to contributors: Jonathan Huang, Vivek Rathod, Derek Chow, Tal Remez, Chen Sun.
October 31, 2017
We have released a new state-of-the-art model for object detection using the Faster-RCNN with the NASNet-A image featurization. This model achieves mAP of 43.1% on the COCO test-dev dataset, improving on the best available model in the zoo by 6% in terms of absolute mAP.
Thanks to contributors: Barret Zoph, Vijay Vasudevan, Jonathon Shlens, Quoc Le
August 11, 2017
We have released an update to the Android Detect demo which will now run models trained using the Tensorflow Object Detection API on an Android device. By default, it currently runs a frozen SSD w/Mobilenet detector trained on COCO, but we encourage you to try out other detection models!
Thanks to contributors: Jonathan Huang, Andrew Harp
June 15, 2017
In addition to our base Tensorflow detection model definitions, this release includes:
- A selection of trainable detection models, including:
  - Single Shot Multibox Detector (SSD) with MobileNet
  - SSD with Inception V2
  - Region-Based Fully Convolutional Networks (R-FCN) with Resnet 101
  - Faster RCNN with Resnet 101
  - Faster RCNN with Inception Resnet v2
- Frozen weights (trained on the COCO dataset) for each of the above models to be used for out-of-the-box inference purposes.
- A Jupyter notebook for performing out-of-the-box inference with one of our released models (see the sketch below).
- Convenient local training scripts as well as distributed training and evaluation pipelines via Google Cloud.
Thanks to contributors: Jonathan Huang, Vivek Rathod, Derek Chow, Chen Sun, Menglong Zhu, Matthew Tang, Anoop Korattikara, Alireza Fathi, Ian Fischer, Zbigniew Wojna, Yang Song, Sergio Guadarrama, Jasper Uijlings, Viacheslav Kovalevskyi, Kevin Murphy
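For a taste of that out-of-the-box inference workflow, here is a minimal TF1-style sketch. The released Jupyter notebook is the authoritative version; the frozen-graph path and test image below are placeholders, while the tensor names match the API's exported graphs.

```python
# Minimal sketch: run a frozen detection graph on one image.
import numpy as np
import tensorflow as tf
from PIL import Image

PATH_TO_FROZEN_GRAPH = 'ssd_mobilenet_v1_coco/frozen_inference_graph.pb'  # placeholder

graph = tf.Graph()
with graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(PATH_TO_FROZEN_GRAPH, 'rb') as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name='')

with tf.Session(graph=graph) as sess:
    # Exported graphs take a uint8 batch of shape [1, height, width, 3].
    image = np.expand_dims(np.array(Image.open('test.jpg')), axis=0)
    boxes, scores, classes, num = sess.run(
        ['detection_boxes:0', 'detection_scores:0',
         'detection_classes:0', 'num_detections:0'],
        feed_dict={'image_tensor:0': image})
    print('Detections in image:', int(num[0]))
```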