Accelerate-Deep-Learning-Inferences

This repository contains notebooks on TensorFlow model optimization with TensorRT (TF-TRT) and TensorFlow Lite (TF-Lite). With them you will be able to: understand the fundamentals of optimization using TF-TRT and TF-Lite, deploy deep learning models at reduced precision (FP32, FP16, and INT8) for the inference stage, and calibrate the weights for INT8 inference. A minimal sketch of these conversion steps is shown below.
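The following is a minimal sketch, not code taken from the notebooks, of how a TensorFlow SavedModel is typically converted with TF-TRT at FP16 and INT8 (including a calibration step) and with the TF-Lite converter. The paths (`saved_model`, `saved_model_trt_fp16`, `saved_model_trt_int8`, `model_fp16.tflite`), the input shape, and the random calibration batches are placeholder assumptions you would replace with your own model and representative data.

```python
# Sketch of TF-TRT and TF-Lite reduced-precision conversion.
# Assumes a SavedModel exists at 'saved_model' (placeholder path).
import numpy as np
import tensorflow as tf
from tensorflow.python.compiler.tensorrt import trt_convert as trt

# --- TF-TRT: FP16 conversion ---
params = trt.TrtConversionParams(precision_mode=trt.TrtPrecisionMode.FP16)
converter = trt.TrtGraphConverterV2(input_saved_model_dir='saved_model',
                                    conversion_params=params)
converter.convert()
converter.save('saved_model_trt_fp16')

# --- TF-TRT: INT8 conversion with calibration ---
params = trt.TrtConversionParams(precision_mode=trt.TrtPrecisionMode.INT8,
                                 use_calibration=True)
converter = trt.TrtGraphConverterV2(input_saved_model_dir='saved_model',
                                    conversion_params=params)

def calibration_input_fn():
    # Feed a few representative batches so TF-TRT can calibrate the
    # INT8 dynamic ranges of weights and activations.
    # Random data and the 224x224x3 shape are placeholders.
    for _ in range(8):
        yield (np.random.rand(1, 224, 224, 3).astype(np.float32),)

converter.convert(calibration_input_fn=calibration_input_fn)
converter.save('saved_model_trt_int8')

# --- TF-Lite: post-training quantization to FP16 ---
lite_converter = tf.lite.TFLiteConverter.from_saved_model('saved_model')
lite_converter.optimizations = [tf.lite.Optimize.DEFAULT]
lite_converter.target_spec.supported_types = [tf.float16]
with open('model_fp16.tflite', 'wb') as f:
    f.write(lite_converter.convert())
```

The converted TF-TRT SavedModels can be loaded with `tf.saved_model.load` and served like any other SavedModel, while the `.tflite` file is loaded with `tf.lite.Interpreter`; the notebooks in this repository cover the details for each precision mode.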

Primary language: Jupyter Notebook. License: MIT.
