ARML

This project demonstrates how to identify faces using Apple's Vision, CoreML, and ARKit APIs. The code is written in Objective-C.

Requirements

  • Xcode 9
  • iPhone 6s or newer
  • CoreML model

Tricks

  • Camera Hacking: Since ARKit renders the screen with a fixed-focus camera, the camera will not auto-focus on its own. To tune the camera you need access to the underlying AVCaptureDevice or AVCaptureSession, which ARKit does not expose, as described here. I worked around this by reading ARSession's private availableSensors property at runtime and finding the ARImageSensor object, which holds references to the AVCaptureDevice and AVCaptureSession instances (see the first sketch after this list).

  • Machine Learning: To identify different people we need a pre-trained CoreML model. You can use Caffe or another neural-network framework to train your own. For this demo I used Microsoft's Custom Vision Service, which is free and convenient for training on images online, and it lets you download the result in CoreML format (see the second sketch after this list).
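A minimal sketch of the camera trick, assuming the private availableSensors key on ARSession and a captureDevice key on ARImageSensor; these are undocumented internals observed at runtime and may change or disappear in any iOS release:

```objc
#import <ARKit/ARKit.h>
#import <AVFoundation/AVFoundation.h>

// Walk ARSession's private sensor list and pull out the capture device.
// "availableSensors", "ARImageSensor" and "captureDevice" are private,
// undocumented names; treat every one of them as an assumption.
static AVCaptureDevice *ARMLCaptureDeviceForSession(ARSession *session) {
    @try {
        id sensors = [session valueForKey:@"availableSensors"];
        for (id sensor in sensors) {
            if ([NSStringFromClass([sensor class]) isEqualToString:@"ARImageSensor"]) {
                return [sensor valueForKey:@"captureDevice"];
            }
        }
    } @catch (NSException *exception) {
        // The private key is gone on this iOS version; give up gracefully.
    }
    return nil;
}

// Once we have the AVCaptureDevice, enabling auto-focus is plain AVFoundation.
static void ARMLEnableAutoFocus(ARSession *session) {
    AVCaptureDevice *device = ARMLCaptureDeviceForSession(session);
    if (device == nil) { return; }
    if ([device isFocusModeSupported:AVCaptureFocusModeContinuousAutoFocus] &&
        [device lockForConfiguration:NULL]) {
        device.focusMode = AVCaptureFocusModeContinuousAutoFocus;
        [device unlockForConfiguration];
    }
}
```

Because everything above relies on private API, the sketch fails silently rather than crashing, so the app keeps running (just without auto-focus) if Apple renames the keys.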
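And a sketch of running the exported model against an ARKit frame with Vision. FaceClassifier is a placeholder for whatever class Xcode generates from your .mlmodel; only the Vision and ARKit calls are standard API:

```objc
#import <ARKit/ARKit.h>
#import <CoreML/CoreML.h>
#import <Vision/Vision.h>
#import "FaceClassifier.h" // hypothetical class generated by Xcode from the exported .mlmodel

// Classify the face in an ARKit frame; ARFrame.capturedImage is already a
// CVPixelBufferRef, so it can be handed straight to Vision.
static void ARMLClassifyFrame(ARFrame *frame) {
    NSError *error = nil;
    MLModel *mlModel = [[[FaceClassifier alloc] init] model];
    VNCoreMLModel *vnModel = [VNCoreMLModel modelForMLModel:mlModel error:&error];
    if (vnModel == nil) { return; }

    VNCoreMLRequest *request = [[VNCoreMLRequest alloc] initWithModel:vnModel
        completionHandler:^(VNRequest *req, NSError *err) {
            // Custom Vision classifiers return VNClassificationObservation results,
            // sorted by confidence; the first one is the best match.
            VNClassificationObservation *top =
                (VNClassificationObservation *)req.results.firstObject;
            NSLog(@"Best match: %@ (%.2f)", top.identifier, top.confidence);
        }];

    VNImageRequestHandler *handler =
        [[VNImageRequestHandler alloc] initWithCVPixelBuffer:frame.capturedImage
                                                     options:@{}];
    [handler performRequests:@[request] error:&error];
}
```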

Acknowledgements

Resources

Screenshot