⚡️FastDeploy is an easy-to-use inference deployment toolbox. It covers mainstream high-quality pre-trained industry models and provides an out-of-the-box development experience across image classification, object detection, image segmentation, face detection, human keypoint recognition, text recognition and other tasks, meeting developers' rapid-deployment needs in multiple scenarios, on multiple hardware targets and platforms.
- 🔥 2022.6.30, 8:30 PM: ⚡️FastDeploy angel-user beta test communication meeting, discussing inference-deployment pain points with developers. Everyone is welcome to scan the QR code to sign up and receive the meeting link.
- 🔥 2022.6.27 ⚡️FastDeploy v0.1.0 Beta released! 🎉
- 💎 Released SDKs for 40 key models across 8 major hardware and software environments
- 😊 Two ways to download and use: web page and pip package
📦 Out-of-the-box inference deployment toolchain, supporting cloud, edge, multi-hardware and multi-platform deployment
- One-click download on the web side, or a single-line pip install command, to quickly obtain all types of SDK packages
- Cloud (including servers, data centers):
- Supports one-line command to start Serving service (including graphical display of web pages)
- Supports one-line command to start image, local video stream, local camera, network video stream prediction
- Supports Windows and Linux operating systems
- Support for Python, C++ programming languages
- Edge:
- Supports edge devices such as NVIDIA Jetson, with video stream prediction services
- Device side (including mobile):
- Supports iOS and Android mobile platforms
- Supports ARM CPU devices
- Supports mainstream hardware
- Support for Intel CPU series (including Core, Xeon, etc.)
- Supports the entire range of ARM CPUs (including Qualcomm, MTK, RK, etc.)
- Supports the full range of NVIDIA GPUs (including V100, T4, Jetson, etc.)
Model | Task | Size (MB) | End: Linux (ARM CPU) | Mobile: Android (ARM CPU) | Mobile: iOS (ARM CPU) | Edge: Linux (Jetson) | Server+Cloud: Linux (X86 CPU) | Server+Cloud: Linux (GPU) | Server+Cloud: Windows (X86 CPU) | Server+Cloud: Windows (GPU)
---|---|---|---|---|---|---|---|---|---|---
PP-LCNet | Classification | 11.9 | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
PP-LCNetv2 | Classification | 26.6 | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
EfficientNet | Classification | 31.4 | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
GhostNet | Classification | 20.8 | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
MobileNetV1 | Classification | 17 | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
MobileNetV2 | Classification | 14.2 | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
MobileNetV3 | Classification | 22 | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
ShuffleNetV2 | Classification | 9.2 | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
SqueezeNetV1.1 | Classification | 5 | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
Inceptionv3 | Classification | 95.5 | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
PP-HGNet | Classification | 59 | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
ResNet50_vd | Classification | 102.5 | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ |
SwinTransformer_224_win7 | Classification | 352.7 | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
PP-PicoDet_s_320_coco | Detection | 4.1 | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
PP-PicoDet_s_320_lcnet | Detection | 4.9 | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
CenterNet | Detection | 4.8 | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
YOLOv3_MobileNetV3 | Detection | 94.6 | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
PP-YOLO_tiny_650e_coco | Detection | 4.4 | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
SSD_MobileNetV1_300_120e_voc | Detection | 23.3 | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
YOLOX_Nano_300e_coco | Detection | 3.7 | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ |
PP-YOLO_ResNet50vd | Detection | 188.5 | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
PP-YOLOv2_ResNet50vd | Detection | 218.7 | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
PP-YOLO_crn_l_300e_coco | Detection | 209.1 | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
YOLOv5s | Detection | 29.3 | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
Faster R-CNN_r50_fpn_1x_coco | Detection | 167.2 | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ |
BlazeFace | Face Detection | 1.5 | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
RetinaFace | Face Localisation | 1.7 | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ |
PP-TinyPose | Keypoint Detection | 5.5 | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
PP-LiteSeg(STDC1) | Segmentation | 32.2 | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
PP-HumanSeg-Lite | Segmentation | 0.556 | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
HRNet-w18 | Segmentation | 38.7 | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ |
Mask R-CNN_r50_fpn_1x_coco | Segmentation | 107.2 | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ |
PP-HumanSeg-Server | Segmentation | 107.2 | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
Unet | Segmentation | 53.7 | ❌ | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ |
Deeplabv3-ResNet50 | Segmentation | 156.5 | ❌ | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ |
PP-OCRv1 | OCR | 2.3+4.4 | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
PP-OCRv2 | OCR | 2.3+4.4 | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
PP-OCRv3 | OCR | 2.4+10.6 | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
PP-OCRv3-tiny | OCR | 2.4+10.7 | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
- Log in to the EasyEdge web page to download the SDK
Developers can also get the latest download links by installing fastdeploy-python via pip
- Environment dependencies: python >= 3.6
- Installation

  pip install fastdeploy-python --upgrade
- Usage
- Lists all models currently supported by FastDeploy
fastdeploy --list_models
- Download the deployment SDK and examples of a model for a specific platform and hardware
fastdeploy --download_sdk \
           --model PP-PicoDet-s_320 \
           --platform Linux \
           --soc x86 \
           --save_dir .
- Parameter description
  - list_models: Lists all models currently supported by FastDeploy
  - download_sdk: Downloads the deployment SDK and examples of a model for a specific platform and hardware
  - model: Model name, e.g. "PP-PicoDet-s_320"; all options can be viewed with list_models
  - platform: Deployment platform; Windows/Linux/Android/iOS are supported
  - soc: Deployment hardware; x86/x86-NVIDIA-GPU/ARM/Jetson are supported
  - save_dir: Directory in which to save the downloaded SDK
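Putting the commands above together, a typical workflow might look as follows. The flags and the model name come from the examples on this page; the target directory `./picodet_sdk` is just an illustrative choice:

```shell
# List all models currently supported by FastDeploy
# and pick one to deploy (here: PP-PicoDet-s_320)
fastdeploy --list_models

# Download the Linux x86 SDK and examples for the chosen model
# into a local directory (an example path, any directory works)
fastdeploy --download_sdk \
           --model PP-PicoDet-s_320 \
           --platform Linux \
           --soc x86 \
           --save_dir ./picodet_sdk
```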
1 Cloud-side deployment

- Linux systems (X86 CPUs, NVIDIA GPUs)
- C++ Inference deployment (with video streaming)
- C++ serviced deployment
- Python Inference deployment
- Python serviced deployment
- Window System (X86 CPU, NVIDIA GPU)
- C++ Inference deployment (with video streaming)
- C++ serviced deployment
- Python Inference deployment
- Python serviced deployment

2 Edge-side deployment
- ArmLinux system (NVIDIA Jetson Nano/TX2/Xavier)
- C++ Inference deployment (with video streaming)
- C++ serviced deployment

3 Device-side deployment
- ArmLinux System (ARM CPU)
- C++ Inference deployment (with video streaming)
- C++ serviced deployment
- Python Inference deployment
- Python serviced deployment

4 Mobile deployment
- iOS system deployment
- Android system deployment

5 Custom model deployment
- Quickly implement personalized model replacement
- Join the community👬: Scan the QR code on WeChat and fill in a questionnaire to join the communication group, and discuss inference-deployment pain points with developers
The SDK generation and download in this project use the free open capabilities of EasyEdge, for which we again express our thanks.
FastDeploy follows the Apache-2.0 Open Source License.