TensorFlow Lite Hand Segmentation iOS Example Application
iOS Versions Supported: iOS 12.0 and above. Xcode Version Required: 10.0 and above
Overview
This is a camera app that continuously runs a MobileNetV2-based segmentation model on the live camera feed.
Prerequisites
- You must have Xcode installed.
- You must have a valid Apple Developer ID.
- The demo app requires a camera and must be run on a real iOS device. You can build and launch it in the iPhone Simulator, but the app will raise a camera-not-found exception (see the availability check sketched after this list).
- You don't need to build the entire TensorFlow library to run the demo; it uses CocoaPods to download the TensorFlow Lite library.
- You'll also need the Xcode command-line tools:
  xcode-select --install
  If this is a new install, you will need to run the Xcode application once to agree to the license before continuing.
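For illustration only (this is not code from this project), the following Swift sketch shows how an app can detect that no camera is available, which is what happens when the demo runs in the iPhone Simulator. The function names defaultBackCamera and startCaptureIfPossible are hypothetical.

```swift
import AVFoundation

/// Returns the default back camera, or nil when no camera is available
/// (for example, when running in the iPhone Simulator).
func defaultBackCamera() -> AVCaptureDevice? {
    return AVCaptureDevice.default(.builtInWideAngleCamera,
                                   for: .video,
                                   position: .back)
}

func startCaptureIfPossible() {
    guard let camera = defaultBackCamera() else {
        // In the Simulator this branch is taken; the demo app surfaces
        // the same condition as a camera-not-found error.
        print("No camera found on this device.")
        return
    }
    print("Using camera: \(camera.localizedName)")
    // ... configure an AVCaptureSession with this device here ...
}
```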
Building the iOS Demo App
- Install CocoaPods if you don't have it:
  sudo gem install cocoapods
- Install the pod to generate the workspace file:
  pod install
  If you have installed this pod before and that command doesn't work, try pod update.
  At the end of this step you should have a file called HandSegmentation.xcworkspace
- Open HandSegmentation.xcworkspace in Xcode.
- If you are running on an iOS device, change the bundle identifier to a unique value and select your development team under 'General->Signing' before building the application.
- Build and run the app in Xcode. You'll have to grant the app permission to use the device's camera (a minimal permission-request sketch follows this list). Point the camera at your hand and watch the model segment it in real time!
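For reference, the sketch below shows one common way to request camera access with AVFoundation before starting capture. The requestCameraPermission helper is hypothetical, the app also needs an NSCameraUsageDescription entry in its Info.plist, and the demo app's own permission handling may differ.

```swift
import AVFoundation

/// Ask for camera access before starting the capture session.
func requestCameraPermission(completion: @escaping (Bool) -> Void) {
    switch AVCaptureDevice.authorizationStatus(for: .video) {
    case .authorized:
        completion(true)
    case .notDetermined:
        // First launch: iOS shows the system camera-permission prompt.
        AVCaptureDevice.requestAccess(for: .video) { granted in
            DispatchQueue.main.async { completion(granted) }
        }
    default:
        // .denied or .restricted: the user must enable access in Settings.
        completion(false)
    }
}
```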
iOS App Details
The app is written entirely in Swift and uses the TensorFlow Lite Swift library to run the segmentation model on each camera frame.
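To give a rough idea of how the TensorFlow Lite Swift API is driven, here is a minimal sketch that loads a bundled .tflite model and runs one inference. The class name SegmentationRunner, the model file name hand_segmentation.tflite, the thread count, and the Data-based input/output handling are illustrative assumptions, not the project's actual code.

```swift
import Foundation
import TensorFlowLite

final class SegmentationRunner {
    private let interpreter: Interpreter

    init() throws {
        // "hand_segmentation" is a placeholder name for the bundled .tflite model.
        guard let modelPath = Bundle.main.path(forResource: "hand_segmentation",
                                               ofType: "tflite") else {
            fatalError("Model file not found in the app bundle.")
        }
        var options = Interpreter.Options()
        options.threadCount = 2
        interpreter = try Interpreter(modelPath: modelPath, options: options)
        try interpreter.allocateTensors()
    }

    /// Runs the model on preprocessed input bytes and returns the raw output tensor bytes,
    /// which the caller would decode into a segmentation mask.
    func run(on inputData: Data) throws -> Data {
        try interpreter.copy(inputData, toInputAt: 0)
        try interpreter.invoke()
        let outputTensor = try interpreter.output(at: 0)
        return outputTensor.data
    }
}
```

In the actual app, the camera delegate would convert each frame to the model's expected input size and pixel format before calling a method like run(on:), and then overlay the decoded mask on the preview.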