E2E_DL_benchmark


End-to-end benchmark for deep learning frameworks

Papers and existing benchmarks focus mostly on the inference stage. In production, however, inference is only one stage of an end-to-end pipeline. This repo builds an end-to-end benchmark for existing deep learning frameworks, i.e., TensorFlow, PyTorch, OpenVINO, and Analytics-Zoo.

End2End in this benchmark:

Image (in memory) -> Pre-processing -> Inference -> Post-processing -> Result (in memory)

Note that we only collect metrics (latency and throughput) from pre-processing through post-processing.

General Benchmark API

def load_model(model_path):
    # Load the framework-specific model from model_path
    return model

def preprocessing(image):
    # Resize, normalize, and convert the image to the framework's input format
    return image

def predict(model, image):
    result = model.predict(image)
    return result

def postprocessing(result):
    # Compute Top-1 and Top-5 predictions from the raw output
    return result

def benchmark(iteration=200):
    # Create dummy data or read images from a file path, then time
    # preprocessing -> predict -> postprocessing over `iteration` runs
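The skeleton above can be fleshed out into a runnable, framework-agnostic sketch. The dummy model, input shape (NHWC, 224x224x3), and class count (1000) are assumptions for illustration; a real run would substitute one of the frameworks below:

```python
import time
import numpy as np

def load_model(model_path):
    # Hypothetical stand-in for a real framework model: returns random scores.
    class DummyModel:
        def predict(self, image):
            return np.random.rand(image.shape[0], 1000)
    return DummyModel()

def preprocessing(image):
    # Normalize to [0, 1]; real pipelines would also resize/transpose.
    return image.astype(np.float32) / 255.0

def predict(model, image):
    return model.predict(image)

def postprocessing(result):
    # Top-5 class indices per sample, highest score first.
    return np.argsort(result, axis=1)[:, ::-1][:, :5]

def benchmark(iteration=200, batch_size=1):
    model = load_model("dummy_path")
    # Dummy uint8 images; timing starts at pre-processing, per the note above.
    data = np.random.randint(0, 256, (batch_size, 224, 224, 3), dtype=np.uint8)
    start = time.time()
    for _ in range(iteration):
        image = preprocessing(data)
        result = predict(model, image)
        top5 = postprocessing(result)
    elapsed = time.time() - start
    latency_ms = elapsed / iteration * 1000
    throughput = iteration * batch_size / elapsed
    return latency_ms, throughput
```

Only the pre-processing-to-post-processing loop is timed, so model loading and data generation do not count toward latency or throughput.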

Models

  1. Resnet_50_v1
  2. Inception_V3
  3. MobileNet
  4. FasterRCNN
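For the three classification models above, the post-processing step reduces to picking the Top-1/Top-5 classes from the score vector (FasterRCNN's detection post-processing differs). A minimal NumPy sketch, with a toy 3-class score matrix as the assumed input:

```python
import numpy as np

def top_k(scores, k=5):
    # Indices of the k highest scores per sample, best first.
    return np.argsort(scores, axis=1)[:, ::-1][:, :k]

scores = np.array([[0.1, 0.7, 0.2],
                   [0.5, 0.3, 0.2]])
print(top_k(scores, k=2))  # top-2 class indices per row: [[1 2], [0 1]]
```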

Usage

python ${benchmark_name}.py -m ${model_path} -b ${batch_size} -i ${iteration}
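The `-m`/`-b`/`-i` flags from the usage line can be parsed with `argparse`; this is a sketch, and the long option names and defaults are assumptions:

```python
import argparse

def parse_args(argv=None):
    # Flags mirror the usage line above; default values are assumptions.
    parser = argparse.ArgumentParser(description="End-to-end DL benchmark")
    parser.add_argument("-m", "--model_path", required=True,
                        help="path to the model file")
    parser.add_argument("-b", "--batch_size", type=int, default=1,
                        help="inference batch size")
    parser.add_argument("-i", "--iteration", type=int, default=200,
                        help="number of timed iterations")
    return parser.parse_args(argv)
```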

OpenVINO

# Install the OpenVINO toolkit
# Source the OpenVINO environment setup script

OpenVINO Python API.

TensorFlow

pip install tensorflow==1.15.0

Keras (TensorFlow.Keras) and TensorFlow 1.15.X API.

PyTorch

pip install torch torchvision

PyTorch and Torch Vision API.

Analytics-Zoo

pip install analytics-zoo

Analytics-Zoo Python API.

Reference

  1. OpenVINO
  2. TensorFlow
  3. PyTorch
  4. Analytics-Zoo