(Ironically, a prototype itself...) 😅
Status: Work In Progress
- Make it easier to prototype basic Machine Learning apps with SwiftUI
- Provide an easy interface for commonly built views to assist with prototyping and idea validation
- Effectively a wrapper around the more complex APIs, providing a simpler interface (perhaps not all the same functionality, but enough to get you started and inspired!)
Here are a few basic examples you can use today.
- Ensure you have created your Xcode project
- Ensure you have added the PrototypeKit package to your project (instructions above -- coming soon)
- Select your project file within the project navigator.
- Ensure that your target is selected
- Select the info tab.
- Right-click within the "Custom iOS Target Properties" table, and select "Add Row"
- Use `Privacy - Camera Usage Description` for the key. Type the reason your app will use the camera as the value. (A sketch for checking this entry at runtime follows this list.)
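If you want to confirm the entry made it into your app's Info.plist, here is a minimal sketch. It assumes nothing beyond Foundation; "Privacy - Camera Usage Description" is stored under the raw key NSCameraUsageDescription.

```swift
import Foundation

/// Returns the camera usage description from Info.plist, if one was set.
/// "Privacy - Camera Usage Description" is stored under the raw key
/// NSCameraUsageDescription.
func cameraUsageDescription() -> String? {
    Bundle.main.object(forInfoDictionaryKey: "NSCameraUsageDescription") as? String
}
```

If this returns nil, iOS will terminate the app the first time the camera is accessed.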
Utilise PKCameraView
```swift
PKCameraView()
```
Full Example
```swift
import SwiftUI
import PrototypeKit

struct ContentView: View {
    var body: some View {
        VStack {
            PKCameraView()
        }
        .padding()
    }
}
```
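PKCameraView composes like any other SwiftUI view, so you can layer your own controls and labels on top of the live feed. A minimal sketch (the overlay text and styling are placeholder choices, not part of PrototypeKit):

```swift
import SwiftUI
import PrototypeKit

struct CameraOverlaySample: View {
    var body: some View {
        ZStack(alignment: .bottom) {
            PKCameraView()

            // Placeholder caption layered over the camera feed.
            Text("Point the camera at something interesting")
                .padding(8)
                .background(Color.black.opacity(0.6))
                .foregroundColor(.white)
                .clipShape(Capsule())
                .padding()
        }
    }
}
```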
- Required Step: Drag your Create ML / Core ML model into Xcode.
- Change `FruitClassifier` below to the name of your model.
- You can use `latestPrediction` as you would any other state variable (e.g. refer to it in other views such as `Slider`). A sketch showing this follows the full example below.
Utilise ImageClassifierView
```swift
ImageClassifierView(modelURL: FruitClassifier.urlOfModelInThisBundle,
                    latestPrediction: $latestPrediction)
```
Full Example
```swift
import SwiftUI
import PrototypeKit

struct ImageClassifierViewSample: View {
    @State var latestPrediction: String = ""

    var body: some View {
        VStack {
            ImageClassifierView(modelURL: FruitClassifier.urlOfModelInThisBundle,
                                latestPrediction: $latestPrediction)

            Text(latestPrediction)
        }
    }
}
```
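Since `latestPrediction` is ordinary `@State`, it can drive any other part of your UI. A minimal sketch (the "Banana" label is hypothetical; use the labels your own model actually produces):

```swift
import SwiftUI
import PrototypeKit

struct FruitReactionView: View {
    @State var latestPrediction: String = ""

    var body: some View {
        VStack {
            ImageClassifierView(modelURL: FruitClassifier.urlOfModelInThisBundle,
                                latestPrediction: $latestPrediction)

            // "Banana" is a placeholder label -- swap in a label your model emits.
            Text(latestPrediction == "Banana" ? "🍌 Found a banana!" : "Still looking…")
                .font(.headline)
        }
    }
}
```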
Utilise LiveTextRecognizerView
```swift
LiveTextRecognizerView(detectedText: $detectedText)
```
Full Example
```swift
import SwiftUI
import PrototypeKit

struct TextRecognizerView: View {
    @State var detectedText: [String] = []

    var body: some View {
        VStack {
            LiveTextRecognizerView(detectedText: $detectedText)

            ScrollView {
                ForEach(Array(detectedText.enumerated()), id: \.offset) { line, text in
                    Text(text)
                }
            }
        }
    }
}
```
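The `detectedText` binding is a plain `[String]`, so you can post-process it like any other array. A minimal sketch that joins the recognized lines and copies them to the clipboard (iOS only, since it uses `UIPasteboard`):

```swift
import SwiftUI
import PrototypeKit
import UIKit

struct TextCopySample: View {
    @State var detectedText: [String] = []

    var body: some View {
        VStack {
            LiveTextRecognizerView(detectedText: $detectedText)

            // Join the recognized lines into one block of text and copy it.
            Button("Copy recognized text") {
                UIPasteboard.general.string = detectedText.joined(separator: "\n")
            }
            .padding()
        }
    }
}
```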
- Required Step: Drag your Create ML / Core ML model into Xcode.
- Change `HandPoseClassifier` below to the name of your model.
- You can use `latestPrediction` as you would any other state variable (e.g. refer to it in other views such as `Slider`). A sketch showing this follows the full example below.
Utilise HandPoseClassifierView
```swift
HandPoseClassifierView(modelURL: HandPoseClassifier.urlOfModelInThisBundle,
                       latestPrediction: $latestPrediction)
```
Full Example
```swift
import SwiftUI
import PrototypeKit

struct HandPoseClassifierViewSample: View {
    @State var latestPrediction: String = ""

    var body: some View {
        VStack {
            HandPoseClassifierView(modelURL: HandPoseClassifier.urlOfModelInThisBundle,
                                   latestPrediction: $latestPrediction)

            Text(latestPrediction)
        }
    }
}
```
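As with the image classifier, `latestPrediction` is ordinary state. A minimal sketch that maps predictions to emoji (the "thumbsUp" and "peace" labels are hypothetical; use the labels from your own model):

```swift
import SwiftUI
import PrototypeKit

struct HandPoseEmojiSample: View {
    @State var latestPrediction: String = ""

    // Placeholder class labels -- replace with your model's actual labels.
    private let emoji: [String: String] = ["thumbsUp": "👍", "peace": "✌️"]

    var body: some View {
        VStack {
            HandPoseClassifierView(modelURL: HandPoseClassifier.urlOfModelInThisBundle,
                                   latestPrediction: $latestPrediction)

            Text(emoji[latestPrediction] ?? "🤷")
                .font(.largeTitle)
        }
    }
}
```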
This uses the built-in system sound classifier and does not currently support custom sound classifier models.
- You can use `recognizedSound` as you would any other state variable (e.g. refer to it in other views such as `Slider`). A sketch showing this follows the full example below.
Utilise the recognizeSounds modifier
```swift
.recognizeSounds(recognizedSound: $recognizedSound)
```
Full Example
```swift
import SwiftUI
import PrototypeKit

struct SoundAnalyzerSampleView: View {
    @State var recognizedSound: String?

    var body: some View {
        VStack {
            Text(recognizedSound ?? "No Sound")
        }
        .padding()
        .navigationTitle("Sound Recogniser Sample")
        .recognizeSounds(recognizedSound: $recognizedSound)
    }
}
```
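Because `recognizedSound` is an optional `String`, it can drive conditional UI directly. A minimal sketch (the "clapping" label is hypothetical; check the label names the system sound classifier actually reports):

```swift
import SwiftUI
import PrototypeKit

struct ClapDetectorSample: View {
    @State var recognizedSound: String?

    var body: some View {
        VStack {
            // "clapping" is a placeholder label -- use one the system classifier reports.
            Text(recognizedSound == "clapping" ? "👏 Heard clapping!" : "Listening…")
                .font(.title2)

            Text(recognizedSound ?? "No sound yet")
                .foregroundColor(.secondary)
        }
        .padding()
        .recognizeSounds(recognizedSound: $recognizedSound)
    }
}
```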
Is this production ready?
no.