In this blog post, I show you simple steps to reduce your PyTorch model's inference time, making it fit for deployment on the edge.
We will begin with a pure PyTorch model from timm, convert it to optimized formats such as ONNX, OpenVINO, TorchScript, and TFLite, and use the latest PyTorch advancements via torch.compile.
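As a taste of the torch.compile step, here is a minimal sketch. It uses a tiny hypothetical `nn.Sequential` model standing in for a timm backbone, and passes `backend="eager"` only to keep the sketch dependency-free; in practice you would load a real timm model and rely on the default inductor backend for actual speedups.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a timm backbone (e.g. timm.create_model("resnet18"))
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 10),
)
model.eval()

# torch.compile (PyTorch 2.x) wraps the model in an optimized callable.
# backend="eager" skips codegen here; omit it to use the default inductor backend.
compiled_model = torch.compile(model, backend="eager")

x = torch.randn(1, 3, 64, 64)
with torch.no_grad():
    out = compiled_model(x)

print(tuple(out.shape))
```

The compiled model is a drop-in replacement: it accepts the same inputs and returns the same outputs as the original, so existing inference code needs no other changes.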
https://dicksonneoh.com/portfolio/unlocking_edge_ml_from_pytorch_to_edge_deployment/