
An example of running pose estimation on iOS using Core ML


PoseEstimation-CoreML


This project performs pose estimation on iOS with Core ML.
If you are interested in iOS + machine learning, visit here to see various demos.

Korean README

Joint Keypoints | Heatmaps | Still Image

Video source:

Pose Capturing | Pose Matching

Features

  • Estimate body pose on an image
  • Real-time inference with the camera's pixel buffer
  • Inference with an image from the photo library
  • Visualize keypoints as heatmaps
  • Visualize keypoints as lines and points
  • Pose capturing and pose matching

How it works


Requirements

  • Xcode 9.2+
  • iOS 11.0+
  • Swift 4

Model

Get PoseEstimationForMobile's model

Download these temporary models from the following link.

Or

☞ Download the Core ML model: model_cpm.mlmodel or hourglass.mlmodel.

These models were converted with the following settings (from https://github.com/tucan9389/pose-estimation-for-mobile):

input_name_shape_dict = {"image:0": [1, 192, 192, 3]}
image_input_names = ["image:0"]
output_feature_names = ['Convolutional_Pose_Machine/stage_5_out:0']

Model Size, Minimum iOS Version, Input/Output Shape

Model     | Size (MB) | Minimum iOS Version | Input Shape      | Output Shape
cpm       | 2.6       | iOS 11              | [1, 192, 192, 3] | [1, 96, 96, 14]
hourglass | 2         | iOS 11              | [1, 192, 192, 3] | [1, 48, 48, 14]

Inference Time (ms)

Model vs. Device | 11 Pro | XS Max | XR | X  | 8  | 8+ | 7  | 7+ | 6S+ | 6+
cpm              | 5      | 27     | 27 | 32 | 31 | 31 | 39 | 37 | 44  | 115
hourglass        | 3      | 6      | 7  | 29 | 31 | 32 | 37 | 42 | 48  | 94

Total Time (ms)

Model vs. Device | 11 Pro | XS Max | XR | X  | 8  | 8+ | 7  | 7+ | 6S+ | 6+
cpm              | 23     | 39     | 40 | 46 | 47 | 45 | 55 | 58 | 56  | 139
hourglass        | 23     | 15     | 15 | 38 | 40 | 40 | 48 | 55 | 58  | 106

FPS

Model vs. Device | 11 Pro | XS Max | XR | X  | 8  | 8+ | 7  | 7+ | 6S+ | 6+
cpm              | 15     | 23     | 23 | 20 | 20 | 21 | 17 | 16 | 16  | 6
hourglass        | 15     | 23     | 23 | 24 | 23 | 23 | 19 | 16 | 15  | 8

Get your own model

Alternatively, you can use your own pose estimation model.

Build & Run

1. Prerequisites

1.1 Import pose estimation model


Once you import the model, the compiler automatically generates a model helper class on the build path. Access the model by creating an instance of that helper class rather than loading it from the build path directly.
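For example, after importing model_cpm.mlmodel, the generated class can be used directly. A minimal sketch; the class and property names follow Xcode's code generation for that file name:

```swift
import CoreML

// Xcode generates a `model_cpm` class from model_cpm.mlmodel at build time.
let estimator = model_cpm()
let mlModel: MLModel = estimator.model  // the underlying MLModel, usable with Vision
```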

1.2 Add a camera-access permission to Info.plist

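The required key is NSCameraUsageDescription; the description string below is only an example and can be any user-facing text:

```xml
<key>NSCameraUsageDescription</key>
<string>The camera is used to estimate your body pose in real time.</string>
```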

2. Dependencies

No external libraries yet.

3. Code

3.1 Import Vision framework

import Vision

3.2 Define properties for Core ML

// properties on ViewController
typealias EstimationModel = model_cpm // the class name (model_cpm) must match the .mlmodel file name
var request: VNCoreMLRequest!
var visionModel: VNCoreMLModel!

3.3 Configure and prepare the model

override func viewDidLoad() {
    super.viewDidLoad()

    visionModel = try? VNCoreMLModel(for: EstimationModel().model)
    request = VNCoreMLRequest(model: visionModel, completionHandler: visionRequestDidComplete)
    request.imageCropAndScaleOption = .scaleFill
}

func visionRequestDidComplete(request: VNRequest, error: Error?) {
    // post-processing after inference goes here
}
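As a concrete example of that post-processing, the sketch below extracts keypoints from the model's heatmap output by taking the per-channel argmax. It assumes the cpm model's output shape [14, 96, 96] (14 keypoint channels); the shape and the observation type should be verified against your model:

```swift
import Vision
import CoreML
import CoreGraphics

func visionRequestDidComplete(request: VNRequest, error: Error?) {
    // Assumption: the model outputs one heatmap MLMultiArray of shape [14, 96, 96].
    guard let observations = request.results as? [VNCoreMLFeatureValueObservation],
          let heatmaps = observations.first?.featureValue.multiArrayValue else { return }

    let keypointCount = heatmaps.shape[0].intValue  // 14
    let height = heatmaps.shape[1].intValue         // 96
    let width = heatmaps.shape[2].intValue          // 96

    // For each keypoint channel, take the location with maximum confidence.
    var keypoints: [(point: CGPoint, confidence: Double)] = []
    for k in 0..<keypointCount {
        var best = (x: 0, y: 0, value: -Double.infinity)
        for y in 0..<height {
            for x in 0..<width {
                let v = heatmaps[[k, y, x] as [NSNumber]].doubleValue
                if v > best.value { best = (x, y, v) }
            }
        }
        // Normalize to [0, 1] so the point can be drawn over the input image.
        keypoints.append((CGPoint(x: Double(best.x) / Double(width),
                                  y: Double(best.y) / Double(height)),
                          best.value))
    }
    // ...render `keypoints` (or the raw heatmaps) on the main thread.
}
```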

3.4 Inference 🏃‍♂️

// on the inference point
let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer)
try? handler.perform([request])
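In the real-time case, the pixel buffer typically comes from a camera capture delegate. A minimal sketch, assuming ViewController holds the request configured in viewDidLoad and an AVCaptureSession with a video data output is already set up:

```swift
import AVFoundation
import Vision

extension ViewController: AVCaptureVideoDataOutputSampleBufferDelegate {
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // Pull the frame's pixel buffer and run the Vision request on it.
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer)
        try? handler.perform([request])
    }
}
```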

Performance Test

1. Import the model

You can download the cpm or hourglass model for Core ML from the tucan9389/pose-estimation-for-mobile repo.

2. Fix the model name in PoseEstimation_CoreMLTests.swift
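Assuming the test target references the model through a typealias, as the ViewController does, switching models would look like:

```swift
// In PoseEstimation_CoreMLTests.swift (hypothetical location of the alias):
typealias EstimationModel = hourglass  // must match the imported .mlmodel file name
```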

3. Run the test

Press ⌘U or click the Build for Testing icon.

See also