In papers and existing benchmarks, most attention goes to the predict/inference stage. In production, however, inference is only one stage of an end-to-end pipeline. This repo aims to build an end-to-end benchmark for popular deep learning frameworks, i.e., TensorFlow, PyTorch, OpenVINO and Analytics-Zoo.
End2End in this benchmark:
Image (in memory) -> Pre-processing -> Inference -> Post-processing -> Result (in memory)
Note: we only collect metrics (latency and throughput) from Pre-processing through Post-processing.
```python
def load_model(model_path):
    # Load the framework-specific model from model_path
    return model

def preprocessing(image):
    # Pre-processing: resize / normalize the image into the model's input format
    return image

def predict(model, image):
    # Inference
    result = model.predict(image)
    return result

def postprocessing(result):
    # Post-processing: compute Top-1 / Top-5 predictions from the raw output
    return top1, top5

def benchmark(iteration=200):
    # Create dummy data or read data from a file path, then for each iteration run
    # preprocessing -> predict -> postprocessing and collect latency / throughput
    pass
```
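For concreteness, here is a minimal sketch of how the timing loop can be wired together on top of the skeleton above; the dummy 224x224x3 input, `time.perf_counter`, and the printed metrics are illustrative assumptions rather than the repo's exact code. The timer deliberately wraps only Pre-processing through Post-processing, matching the note above.

```python
import time

import numpy as np

def benchmark(model_path, batch_size=1, iteration=200):
    model = load_model(model_path)
    # Dummy uint8 image batch standing in for real input data (assumed shape)
    dummy_images = np.random.randint(0, 255, size=(batch_size, 224, 224, 3), dtype=np.uint8)

    latencies = []
    for _ in range(iteration):
        start = time.perf_counter()                     # timer starts before pre-processing
        batch = preprocessing(dummy_images)
        result = predict(model, batch)
        postprocessing(result)
        latencies.append(time.perf_counter() - start)   # timer stops after post-processing

    avg_latency = sum(latencies) / len(latencies)
    throughput = batch_size / avg_latency               # images per second
    print("latency: %.2f ms, throughput: %.2f img/s" % (avg_latency * 1000, throughput))
```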
Supported models:
- Resnet_50_v1
- Inception_V3
- MobileNet
- FasterRCNN
Run a benchmark with:

```bash
python ${benchmark_name}.py -m ${model_path} -b ${batch_size} -i ${iteration}
```
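Each script can parse these flags roughly as follows; the use of `argparse` and the long option names are assumptions, only the short flags `-m`, `-b`, `-i` come from the command line above.

```python
import argparse

def parse_args():
    # Short flags match the command line shown above; long names are assumed
    parser = argparse.ArgumentParser(description="End-to-end inference benchmark")
    parser.add_argument("-m", "--model_path", type=str, required=True,
                        help="path to the model file")
    parser.add_argument("-b", "--batch_size", type=int, default=1,
                        help="inference batch size")
    parser.add_argument("-i", "--iteration", type=int, default=200,
                        help="number of benchmark iterations")
    return parser.parse_args()

if __name__ == "__main__":
    args = parse_args()
    benchmark(args.model_path, args.batch_size, args.iteration)
```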
```bash
# install OpenVINO
# source the OpenVINO config/environment script
```

Uses the OpenVINO Python API.
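As a hedged sketch of what the OpenVINO path can look like (this assumes the pre-2022 `openvino.inference_engine` API; the IR paths, device and input handling are placeholders, and `input_info` may be `inputs` on older releases):

```python
from openvino.inference_engine import IECore

def load_model(model_xml):
    # Read the IR (.xml + .bin) and compile it for the CPU plugin
    ie = IECore()
    net = ie.read_network(model=model_xml, weights=model_xml.replace(".xml", ".bin"))
    exec_net = ie.load_network(network=net, device_name="CPU")
    return net, exec_net

def predict(model, image):
    net, exec_net = model
    input_blob = next(iter(net.input_info))              # name of the first input
    return exec_net.infer(inputs={input_blob: image})    # dict of output blobs
```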
```bash
pip install tensorflow==1.15.0
```

Uses the Keras (tf.keras) and TensorFlow 1.15.x APIs.
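A minimal sketch of the TensorFlow/Keras variant, assuming a bundled `tf.keras.applications` ResNet-50 stands in for the benchmarked model (the repo's scripts load their own model files, so treat the model choice here as an assumption):

```python
import tensorflow as tf  # 1.15.x

def load_model(model_path=None):
    # Assumption: use a bundled Keras application instead of a custom model file
    return tf.keras.applications.ResNet50(weights="imagenet")

def preprocessing(images):
    # Scale/normalize a (N, 224, 224, 3) batch the way ResNet-50 expects
    return tf.keras.applications.resnet50.preprocess_input(images.astype("float32"))

def predict(model, images):
    return model.predict(images)

def postprocessing(result):
    # Decode the raw scores into Top-5 (class, label, probability) tuples
    return tf.keras.applications.resnet50.decode_predictions(result, top=5)
```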
```bash
pip install torch torchvision
```

Uses the PyTorch and TorchVision APIs.
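A minimal sketch of the PyTorch variant, assuming a pretrained TorchVision ResNet-50 and standard ImageNet preprocessing (the model and transform choices are assumptions, not the repo's exact code):

```python
import torch
import torchvision.models as models
import torchvision.transforms as transforms

def load_model(model_path=None):
    # Assumption: use a pretrained TorchVision model rather than a custom checkpoint
    model = models.resnet50(pretrained=True)
    model.eval()
    return model

def preprocessing(pil_image):
    # Standard ImageNet preprocessing; add a batch dimension
    transform = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])
    return transform(pil_image).unsqueeze(0)

def predict(model, batch):
    with torch.no_grad():                    # inference only, no gradient tracking
        return model(batch)

def postprocessing(result):
    # Top-5 class indices per image
    return torch.topk(result, k=5, dim=1).indices
```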
```bash
pip install analytics-zoo
```

Uses the Analytics-Zoo Python API.
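A hedged sketch of the Analytics-Zoo path; the names used here (`init_nncontext`, `InferenceModel`, `load_tf`, `predict`) reflect the Analytics-Zoo inference API as best understood and may differ between versions, and the frozen-graph path is a placeholder, so treat this as an assumption rather than the repo's exact code:

```python
import numpy as np
from zoo.common.nncontext import init_nncontext
from zoo.pipeline.inference import InferenceModel

sc = init_nncontext()                          # start the local Spark/BigDL context

model = InferenceModel()
model.load_tf("/path/to/frozen_graph.pb")      # placeholder path; load_openvino() is another option

dummy_batch = np.random.rand(1, 224, 224, 3).astype(np.float32)
result = model.predict(dummy_batch)            # raw prediction output
```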