📷 VisionCamera Frame Processor Plugin for object detection using TensorFlow Lite Task Vision.
With this library, you can use the benefits of Machine Learning in your React Native app without a single line of native code. Create your own model or find and use one commonly available on TFHub. Implement the solution in a few simple steps:
- react-native >= 0.71.3
- react-native-reanimated >= 2.14.4
- react-native-vision-camera >= 2.15.4
You can find the model structure requirements here
Install the required packages in your React Native project:
```sh
npm install --save vision-camera-realtime-object-detection
# or with yarn
yarn add vision-camera-realtime-object-detection
```
If you're on a Mac and developing for iOS, you need to install the pods (via CocoaPods) to complete the linking:

```sh
npx pod-install
```
Add this to the `plugins` section of your `babel.config.js`:

```js
[
  'react-native-reanimated/plugin',
  {
    globals: ['__detectObjects'],
  },
],
```
As required by react-native-reanimated, insert the following as the first line of your `index.tsx`:

```js
import 'react-native-reanimated';
```
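Putting the pieces together, a complete `babel.config.js` might look like the following sketch. It assumes the default React Native preset (`module:metro-react-native-babel-preset`); your project may use a different preset, so adjust accordingly:

```javascript
module.exports = {
  presets: ['module:metro-react-native-babel-preset'],
  plugins: [
    // react-native-reanimated/plugin must be listed last
    [
      'react-native-reanimated/plugin',
      {
        // expose the native frame-processor function to worklets
        globals: ['__detectObjects'],
      },
    ],
  ],
};
```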
To add your custom TensorFlow Lite model to your app, copy your `*.tflite` file to your `assets/model` directory:
```
...
|-- assets
|   |-- images
|   |-- fonts
|   |-- model
|       |-- your_custom_model.tflite
|-- src
|   |-- App.tsx
...
```
Add to your `react-native.config.js`:

```js
...
"assets": [
  "./assets/model/",
],
...
```

and run the command:

```sh
npx react-native-asset
```
🎉 Use Realtime Object Detection in your own component!
```tsx
import {
  DetectedObject,
  detectObjects,
  FrameProcessorConfig,
} from 'vision-camera-realtime-object-detection';

// ...
const frameProcessorConfig: FrameProcessorConfig = {
  modelFile: 'your_custom_model.tflite', // name and extension of your model
  scoreThreshold: 0.5,
};

const frameProcessor = useFrameProcessor((frame) => {
  'worklet';
  const detectedObjects: DetectedObject[] = detectObjects(frame, frameProcessorConfig);
}, []);

return (
  <Camera
    device={device}
    isActive={true}
    frameProcessorFps={5}
    frameProcessor={frameProcessor}
  />
);
```
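Since the detection results describe boxes as fractions of the frame (see the tables below), drawing an overlay means scaling them to your view's pixel size. The sketch below shows one way to do that; the `NormalizedBox`/`PixelBox` interfaces are re-declared locally so the snippet stands alone (in your app, use the library's `DetectedObject` type instead):

```typescript
// Shape of the fractional box returned per detection (values 0..1).
interface NormalizedBox {
  top: number;    // fraction of frame height
  left: number;   // fraction of frame width
  width: number;  // fraction of frame width
  height: number; // fraction of frame height
}

// Absolute pixel values, e.g. for an absolutely-positioned <View>
// overlay rendered on top of the <Camera>.
interface PixelBox {
  top: number;
  left: number;
  width: number;
  height: number;
}

// Scale a normalized detection box to pixels for a given view size.
function toPixelBox(box: NormalizedBox, viewWidth: number, viewHeight: number): PixelBox {
  return {
    top: box.top * viewHeight,
    left: box.left * viewWidth,
    width: box.width * viewWidth,
    height: box.height * viewHeight,
  };
}

const box = toPixelBox({ top: 0.1, left: 0.25, width: 0.5, height: 0.2 }, 400, 800);
// box = { top: 80, left: 100, width: 200, height: 160 }
```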
Use the configuration interface to customize the library's behavior. It exposes the following properties:

| Prop | Type | Mandatory | Default | Note |
| --- | --- | --- | --- | --- |
| `modelFile` | `string` | ✔ | - | The name and extension of your custom TensorFlow Lite model (e.g. `model.tflite`) |
| `scoreThreshold` | `number` | - | 0.3 | (between 0 and 1) Cut-off threshold below which detection results are discarded |
| `maxResults` | `number` | - | 1 | Maximum number of top-scored detection results to return |
| `numThreads` | `number` | - | 1 | The number of threads used for TFLite ops that support multi-threading when running inference on the CPU |
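As an illustration, a config that sets every documented option might look like the sketch below. The interface is re-declared here so the snippet is self-contained (import `FrameProcessorConfig` from the library in your app), and the model filename is a placeholder:

```typescript
// Local re-declaration of the documented config shape.
interface FrameProcessorConfig {
  modelFile: string;       // mandatory
  scoreThreshold?: number; // default 0.3
  maxResults?: number;     // default 1
  numThreads?: number;     // default 1
}

const config: FrameProcessorConfig = {
  modelFile: 'your_custom_model.tflite', // name + extension of your model
  scoreThreshold: 0.5, // keep detections scored 0.5 or higher
  maxResults: 3,       // return up to 3 top-scored detections
  numThreads: 2,       // CPU threads for multi-threaded TFLite ops
};
```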
The `detectObjects` method returns a list of detected objects in the following form:

| Prop | Type | Note |
| --- | --- | --- |
| `labels` | `ObjectLabel[]` | An array of labels matching the detected object |
| `top` | `number` | (fraction between 0 and 1) position of the detected object's top edge relative to the frame |
| `left` | `number` | (fraction between 0 and 1) position of the detected object's left edge relative to the frame |
| `width` | `number` | (fraction between 0 and 1) width of the detected object relative to the frame |
| `height` | `number` | (fraction between 0 and 1) height of the detected object relative to the frame |
Each element of `labels` is an `ObjectLabel` with the following form:

| Prop | Type | Note |
| --- | --- | --- |
| `label` | `string` | Label matching the detected object |
| `confidence` | `number` | A number between 0 and 1 indicating confidence that an object of the above type was genuinely detected |
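A detection may carry several candidate labels, so a common step is picking the most confident one above a cut-off. The helper below is a sketch of that; `ObjectLabel` is re-declared locally so the snippet stands alone (import it from the library in your app):

```typescript
// Local re-declaration of the documented label shape.
interface ObjectLabel {
  label: string;
  confidence: number;
}

// Return the most confident label at or above minConfidence,
// or undefined when nothing qualifies.
function bestLabel(labels: ObjectLabel[], minConfidence = 0.5): ObjectLabel | undefined {
  return labels
    .filter((l) => l.confidence >= minConfidence)
    .sort((a, b) => b.confidence - a.confidence)[0];
}

const result = bestLabel([
  { label: 'cat', confidence: 0.42 },
  { label: 'dog', confidence: 0.87 },
]);
// result?.label === 'dog' (cat is below the 0.5 cut-off)
```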
List of tasks to be implemented:
- Adjusting to VisionCamera V3 (the upcoming version intends to rewrite frame processors and introduce exciting new features, such as drawing on a frame in a Frame Processor using RN Skia)
- CPU and NNAPI delegates for Android
- GPU and Core ML delegates for iOS
- Clean up native code
See the contributing guide to learn how to contribute to the repository and the development workflow.
MIT
Made with create-react-native-library