PoseEstimation-CoreML

This project demonstrates Pose Estimation on iOS with Core ML.
If you are interested in iOS + Machine Learning, visit here to see various demos.

Demos: jointed keypoints and concatenated heatmap.

Korean README

How it works


Video source: https://www.youtube.com/watch?v=EM16LBKBEgI

Requirements

  • Xcode 9.2+
  • iOS 11.0+
  • Swift 4.1

Download model

Pose Estimation model for Core ML (model_cpm.mlmodel)
☞ Download the Core ML model: model_cpm.mlmodel or hourglass.mlmodel.

The model was converted with the following parameters (from https://github.com/edvardHua/PoseEstimationForMobile):

input_name_shape_dict = {"image:0": [1, 224, 224, 3]}
image_input_names = ["image:0"]
output_feature_names = ['Convolutional_Pose_Machine/stage_5_out:0']

                              cpm                                       hourglass
Input shape                   [1, 192, 192, 3]                          [1, 192, 192, 3]
Output shape                  [1, 96, 96, 14]                           [1, 48, 48, 14]
Input node name               image                                     image
Output node name              Convolutional_Pose_Machine/stage_5_out    hourglass_out_3
Inference time on iPhone X    57 ms                                     33 ms

Build & Run

1. Prerequisites

1.1 Import pose estimation model


Once you import the model, the compiler automatically generates a model helper class on the build path. You access the model by creating an instance of this helper class, not by referencing the build path directly.
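For example, the generated class can be wrapped for use with the Vision framework. A minimal sketch, assuming the imported file is model_cpm.mlmodel (so Xcode generates a class named model_cpm); the post-processing comment marks where this repo's own logic would go:

import CoreML
import Vision

// The .mlmodel file name determines the generated class name,
// so model_cpm.mlmodel yields a `model_cpm` class.
let coreMLModel = model_cpm()

// Wrap the underlying MLModel so Vision can drive inference.
guard let visionModel = try? VNCoreMLModel(for: coreMLModel.model) else {
    fatalError("Could not create VNCoreMLModel")
}

let request = VNCoreMLRequest(model: visionModel) { request, _ in
    // The model outputs keypoint heatmaps as a multi-array observation.
    guard let observations = request.results as? [VNCoreMLFeatureValueObservation],
          let heatmap = observations.first?.featureValue.multiArrayValue else { return }
    // Post-process `heatmap` into joint positions here.
}
request.imageCropAndScaleOption = .scaleFill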

1.2 Add a camera-access permission to Info.plist

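The key to add is NSCameraUsageDescription; the usage string is up to you, for example:

Key: NSCameraUsageDescription
Value (example): This app uses the camera to run pose estimation on live video.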

2. Dependencies

No external libraries yet.

3. Code

(Write-up in preparation; see the sketch below.)
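Until the code walk-through is published, here is a minimal sketch of the heatmap post-processing step, assuming the output multi-array is laid out as [height, width, joint] (e.g. [96, 96, 14] for cpm). The Keypoint type and keypoints(from:) function are hypothetical names for illustration, not this repo's API:

import CoreML

// Hypothetical decoding step: reduce each of the 14 heatmap channels
// to its hottest cell and report it as a normalized keypoint.
struct Keypoint {
    let x: Double          // normalized column, 0...1
    let y: Double          // normalized row, 0...1
    let confidence: Double // raw heatmap value at the maximum
}

func keypoints(from heatmap: MLMultiArray) -> [Keypoint] {
    // Assumed layout: [height, width, joint channel], e.g. [96, 96, 14].
    let height = heatmap.shape[0].intValue
    let width = heatmap.shape[1].intValue
    let joints = heatmap.shape[2].intValue

    var result: [Keypoint] = []
    for joint in 0..<joints {
        var best = (row: 0, col: 0, value: -Double.infinity)
        for row in 0..<height {
            for col in 0..<width {
                let value = heatmap[[row, col, joint] as [NSNumber]].doubleValue
                if value > best.value { best = (row, col, value) }
            }
        }
        result.append(Keypoint(x: Double(best.col) / Double(width),
                               y: Double(best.row) / Double(height),
                               confidence: best.value))
    }
    return result
}

Argmax per channel is the simplest decoding; real implementations often smooth the heatmap or interpolate around the peak for sub-cell accuracy.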

See also