ImageClassification-CoreML

An example of running image classification on iOS using Core ML.


(demo GIF: DEMO-CoreML)

Requirements

  • Xcode 9.2+
  • iOS 11.0+ (some models require iOS 11.2+ or iOS 12+; see the Model table below)
  • Swift 4

Model

| Model | Size (MB) | Minimum iOS Version | Download Link |
| --- | --- | --- | --- |
| MobileNet | 17.1 | iOS 11 | Machine Learning - Models - Apple Developer |
| MobileNetV2 | 24.7 | iOS 11 | Machine Learning - Models - Apple Developer |
| MobileNetV2FP16 | 12.4 | iOS 11.2 | Machine Learning - Models - Apple Developer |
| MobileNetV2Int8LUT | 6.3 | iOS 12 | Machine Learning - Models - Apple Developer |
| Resnet50 | 102.6 | iOS 11 | Machine Learning - Models - Apple Developer |
| Resnet50FP16 | 51.3 | iOS 11.2 | Machine Learning - Models - Apple Developer |
| Resnet50Int8LUT | 25.7 | iOS 12 | Machine Learning - Models - Apple Developer |
| Resnet50Headless | 94.4 | iOS 11 | Machine Learning - Models - Apple Developer |
| SqueezeNet | 5 | iOS 11 | Machine Learning - Models - Apple Developer |
| SqueezeNetFP16 | 2.5 | iOS 11.2 | Machine Learning - Models - Apple Developer |
| SqueezeNetInt8LUT | 1.3 | iOS 12 | Machine Learning - Models - Apple Developer |
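
To benchmark a different model, download its .mlmodel file, add it to the Xcode project, and point the typealias from section 3.2 at the generated class. A minimal sketch (class names follow the model file names above):

```swift
// One-line model swap (see section 3.2).
// Mind the minimum iOS version: FP16 variants need iOS 11.2+, Int8LUT variants need iOS 12+.
typealias ClassificationModel = MobileNetV2FP16
```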

Inference Time (ms)

| Model vs. Device | 12 Pro | 12 | 12 Mini | 11 Pro | XS | XS Max | XR | X | 7+ | 7 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| MobileNet | 17 | 17 | 14 | 13 | 16 | 18 | 19 | 33 | 43 | 35 |
| MobileNetV2 | 15 | 15 | 17 | 14 | 21 | 18 | 21 | 46 | 64 | 53 |
| MobileNetV2FP16 | 8 | 17 | 14 | 14 | 20 | 19 | 20 | 48 | 65 | 57 |
| MobileNetV2Int8LUT | 18 | 16 | 16 | 14 | 21 | 21 | 20 | 53 | 64 | 53 |
| Resnet50 | 21 | 18 | 24 | 20 | 27 | 25 | 26 | 61 | 78 | 63 |
| Resnet50FP16 | 19 | 18 | 19 | 20 | 26 | 26 | 27 | 64 | 75 | 74 |
| Resnet50Int8LUT | 19 | 20 | 20 | 20 | 27 | 25 | 26 | 60 | 77 | 75 |
| Resnet50Headless | 11 | 11 | 11 | 13 | 18 | 13 | 18 | 36 | 54 | 53 |
| SqueezeNet | 14 | 15 | 17 | 12 | 17 | 17 | 18 | 24 | 35 | 29 |
| SqueezeNetFP16 | 13 | 16 | 10 | 13 | 17 | 17 | 18 | 24 | 36 | 29 |
| SqueezeNetInt8LUT | 16 | 17 | 15 | 13 | 18 | 19 | 18 | 27 | 34 | 30 |

Total Time (ms)

| Model vs. Device | 12 Pro | 12 | 12 Mini | 11 Pro | XS | XS Max | XR | X | 7+ | 7 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| MobileNet | 19 | 18 | 15 | 15 | 18 | 20 | 21 | 35 | 46 | 37 |
| MobileNetV2 | 16 | 18 | 19 | 16 | 23 | 21 | 23 | 48 | 67 | 55 |
| MobileNetV2FP16 | 8 | 18 | 18 | 15 | 24 | 21 | 23 | 50 | 69 | 60 |
| MobileNetV2Int8LUT | 19 | 18 | 17 | 15 | 23 | 23 | 22 | 55 | 67 | 56 |
| Resnet50 | 22 | 20 | 25 | 22 | 30 | 28 | 29 | 64 | 82 | 66 |
| Resnet50FP16 | 20 | 19 | 20 | 22 | 28 | 28 | 30 | 66 | 78 | 76 |
| Resnet50Int8LUT | 21 | 21 | 23 | 22 | 29 | 28 | 28 | 63 | 80 | 78 |
| Resnet50Headless | 11 | 11 | 12 | 14 | 19 | 13 | 18 | 36 | 54 | 54 |
| SqueezeNet | 15 | 16 | 18 | 14 | 18 | 18 | 20 | 25 | 37 | 31 |
| SqueezeNetFP16 | 14 | 17 | 11 | 13 | 18 | 18 | 19 | 26 | 38 | 31 |
| SqueezeNetInt8LUT | 18 | 17 | 17 | 14 | 20 | 20 | 19 | 29 | 37 | 32 |

FPS

| Model vs. Device | 12 Pro | 12 | 12 Mini | 11 Pro | XS | XS Max | XR | X | 7+ | 7 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| MobileNet | 22 | 24 | 24 | 29 | 23 | 23 | 23 | 23 | 20 | 23 |
| MobileNetV2 | 25 | 24 | 24 | 29 | 23 | 23 | 23 | 20 | 13 | 17 |
| MobileNetV2FP16 | 12 | 24 | 24 | 29 | 23 | 23 | 23 | 18 | 13 | 15 |
| MobileNetV2Int8LUT | 23 | 23 | 23 | 29 | 23 | 23 | 23 | 16 | 13 | 16 |
| Resnet50 | 23 | 23 | 24 | 29 | 23 | 23 | 23 | 14 | 11 | 14 |
| Resnet50FP16 | 23 | 24 | 24 | 29 | 23 | 23 | 23 | 14 | 11 | 12 |
| Resnet50Int8LUT | 23 | 24 | 23 | 29 | 23 | 23 | 23 | 15 | 11 | 12 |
| Resnet50Headless | 21 | 24 | 23 | 29 | 23 | 23 | 23 | 23 | 16 | 17 |
| SqueezeNet | 36 | 24 | 24 | 29 | 23 | 23 | 23 | 23 | 23 | 23 |
| SqueezeNetFP16 | 25 | 23 | 24 | 29 | 23 | 23 | 23 | 23 | 22 | 23 |
| SqueezeNetInt8LUT | 22 | 23 | 23 | 29 | 23 | 23 | 23 | 23 | 23 | 23 |

Build & Run

1. Prerequisites

1.1 Import the Core ML model

(screenshot: importing the Core ML model into the Xcode project)

Once you import the model, Xcode automatically generates a model helper class at build time. Access the model by creating an instance of that generated helper class, not by referencing the file on the build path.
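
For example, after adding MobileNet.mlmodel, a minimal sketch of accessing the generated class (assuming the default generated class name MobileNet):

```swift
import CoreML

// `MobileNet` is the class Xcode generates from MobileNet.mlmodel at build time.
let mobileNet = MobileNet()              // instance of the generated helper class
let mlModel: MLModel = mobileNet.model   // underlying MLModel, handed to Vision below
```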

1.2 Add a camera usage permission to Info.plist

Camera capture requires the NSCameraUsageDescription key (Privacy - Camera Usage Description) in Info.plist; without it, iOS terminates the app when it tries to access the camera.

(screenshot: Info.plist camera permission entry)

2. Dependencies

No external libraries yet.

3. Code

3.1 Import Vision framework

```swift
import Vision
```

3.2 Define properties for Core ML

```swift
// MARK: - Core ML model
typealias ClassificationModel = MobileNet
var coremlModel: ClassificationModel? = nil

// MARK: - Vision Properties
var request: VNCoreMLRequest?
var visionModel: VNCoreMLModel?
```

3.3 Configure and prepare the model

```swift
override func viewDidLoad() {
    super.viewDidLoad()

    // Wrap the Core ML model in a Vision model and prepare the request.
    if let visionModel = try? VNCoreMLModel(for: ClassificationModel().model) {
        self.visionModel = visionModel
        request = VNCoreMLRequest(model: visionModel, completionHandler: visionRequestDidComplete)
        request?.imageCropAndScaleOption = .scaleFill
    } else {
        fatalError("Failed to create VNCoreMLModel")
    }
}

func visionRequestDidComplete(request: VNRequest, error: Error?) {
    // Post-process the inference results here (see the sketch below).
}
```
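
For a classification model, the request's results are VNClassificationObservation values. A minimal post-processing sketch, assuming you only want the top label (the printing is illustrative):

```swift
func visionRequestDidComplete(request: VNRequest, error: Error?) {
    guard let observations = request.results as? [VNClassificationObservation],
          let top = observations.first else { return }

    // Each observation carries a class label and a confidence in [0, 1].
    DispatchQueue.main.async {
        print("\(top.identifier): \(top.confidence)")  // e.g. update a UILabel instead
    }
}
```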

3.4 Inference 🏃‍♂️

```swift
// `pixelBuffer` is a camera frame (CVPixelBuffer); see the sketch below.
guard let request = request else { fatalError() }
let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer)
try? handler.perform([request])
```
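
In this app the pixel buffers come from the live camera feed. A hedged sketch of driving inference from the capture delegate (the ViewController name and the AVCaptureSession wiring are assumptions, not shown in this README):

```swift
import AVFoundation
import Vision

extension ViewController: AVCaptureVideoDataOutputSampleBufferDelegate {
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // Extract the frame's pixel buffer and run the Vision request on it.
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer),
              let request = request else { return }
        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer)
        try? handler.perform([request])
    }
}
```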