This section describes how to deploy PaddlePaddle models to mobile and embedded devices, along with deployment optimization methods and benchmarks.
- Build PaddlePaddle for Android
- Build PaddlePaddle for iOS
- Build PaddlePaddle for Raspberry Pi 3
- Build PaddlePaddle for PX2
- How to build the PaddlePaddle mobile inference library with minimum size
- Merge batch normalization before deploying the model to mobile
- Compress the model before deploying it to mobile
- Merge multiple model parameter files into one file
- How to deploy an int8 model for mobile inference with PaddlePaddle
- How to use pruning to train a smaller model
- Benchmark of MobileNet
- Benchmark of ENet
- Benchmark of DepthwiseConvolution in PaddlePaddle