source: https://images.app.goo.gl/tqCZU7uG7WfWwAt19
- Run inference with the YOLACT model and perform post-processing on its outputs
- Import a photo from the device's photo library, run inference, and draw the result on the screen
- Capture every frame from the device's camera, run inference, and draw the result on the screen
- Make the camera-capture path run in real time
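The photo-library and camera paths above both reduce to the same core step: feeding an image into the Core ML model and handling its outputs. Below is a minimal sketch of that step using the Vision framework. The generated model class name (`yolact`) and the post-processing placeholder are assumptions; Xcode generates the class when the `.mlmodel` is added to the project, and the actual YOLACT decoding (box decoding, NMS, mask assembly) lives in the project's `ViewController.swift`.

```swift
import CoreML
import Vision
import UIKit

// Sketch: run the YOLACT model on a UIImage via Vision.
// Names here are illustrative, not the repository's exact API.
final class YolactRunner {
    private lazy var request: VNCoreMLRequest? = {
        // `yolact` is the class Xcode generates from yolact.mlmodel.
        guard let model = try? VNCoreMLModel(for: yolact().model) else { return nil }
        let request = VNCoreMLRequest(model: model) { request, _ in
            // The model emits 4 MLMultiArrays; YOLACT post-processing
            // (decode boxes, NMS, combine prototype masks) goes here.
            guard let outputs = request.results as? [VNCoreMLFeatureValueObservation] else { return }
            _ = outputs // ... decode and draw on screen
        }
        // Scale the input to match the model's 550x550 input shape.
        request.imageCropAndScaleOption = .scaleFill
        return request
    }()

    func predict(_ image: UIImage) {
        guard let cgImage = image.cgImage, let request = request else { return }
        let handler = VNImageRequestHandler(cgImage: cgImage)
        try? handler.perform([request])
    }
}
```

For the real-time camera path, the same `VNCoreMLRequest` can be reused per frame with a `VNImageRequestHandler` built from each `CVPixelBuffer`, so the model is loaded only once.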
DEMO reference: https://github.com/Ma-Dan/Yolact-CoreML
Model | Size (MB) | Minimum iOS Version | Input Shape | Output Shape | Download Link | Source Link
---|---|---|---|---|---|---
yolact.mlmodel | 146.9 | iOS 11.2 | [1, 550, 550, 3] | 4 MLMultiArrays | Link | link
```
InstanceSegmentation-CoreML
├── InstanceSegmentation-CoreML
|   ├── AppDelegate.swift
|   ├── Assets.xcassets
|   ├── Base.lproj
|   ├── Info.plist
|   ├── mlmodels
|   |   └── yolact.mlmodel # download this from the releases of this repository
|   └── ViewController.swift
├── InstanceSegmentation-CoreML.xcodeproj
├── README.md
└── LICENSE
```