mrousavy/react-native-vision-camera

‼️‼️‼️‼️ ✨ VisionCamera V3 ‼️‼️‼️‼️‼️

mrousavy opened this issue · 261 comments

We at Margelo are planning the next major version of react-native-vision-camera: VisionCamera V3 ✨

For VisionCamera V3 we target one major feature and a ton of stability and performance improvements:

  1. Write-back Frame Processors. We are introducing a new feature where you can simply draw on a Frame in a Frame Processor using RN Skia. This allows you to draw face masks, filters, overlays, color shading, shaders, Metal, etc.
    • Uses a hardware-accelerated Skia layer for showing the Preview
    • Some cool examples: an inverted-colors shader filter, a VHS filter (inspired by Snapchat's VHS + distortion filter), and a realtime text/bounding-box overlay
    • Realtime face blurring or license plate blurring
    • Easy to write color correction and beauty filters
    • All in simple JS (RN Skia) - no native code, with hot reload, while still maintaining pretty much native performance!
  2. Sync Frame Processors. Frame Processors will now be fully synchronous and run on the same thread the Camera runs on.
    • Pretty much on par with native performance now.
    • Run frame processing without any delay - everything until your function returns is the latest data.
    • Use runAtTargetFps(fps, ...) to run code at a throttled FPS rate inside your Frame Processor
    • Use runAsync(...) to run code on a separate thread for background processing inside your Frame Processor. This can take longer without blocking the Camera. (See the sketch after this list.)
  3. Migrate VisionCamera to RN 0.71. Benefits:
    • Much simpler build setup. The CMakeLists/build.gradle files will be simplified as we will use prefabs, and a ton of annoying build errors should be fixed.
    • Up to date with latest React Native version
    • Prefabs support on Android
    • No more Boost/Glog/Folly downloading/extracting
  4. Completely redesigned declarative API for device/format selection (resolution, fps, low-light, ...)
    • Control exactly what FPS you want to record at
    • Know exactly whether a desired format is supported and be able to fall back to a different one
    • Control the exact resolution and know what is supported (e.g. higher than 1080p, but no higher than 4k, ...)
    • Control settings like low-light mode, compression, recording format H.264 or H.265, etc.
    • Add a reactive API for getAvailableCameraDevices() so external devices can be plugged in/out at runtime
    • Add a zero-shutter-lag API for CameraX
  5. Rewrite the native Android part from CameraX to Camera2
    • Much more stability as CameraX just isn't mature enough yet
    • Much more flexibility with devices/formats
    • Slow-motion / 240 FPS recording on Android
  6. Use a custom Worklet Runtime instead of Reanimated
    • Fixes a ton of crashes and stability issues in Frame Processors/Plugins
    • Improves compilation time as we don't need to extract Reanimated anymore
    • Doesn't break with a new Reanimated version
    • No longer requires Reanimated v2 or higher
  7. ML Models straight from JavaScript. With the custom Worklet Runtime, you can use outside HostObjects and HostFunctions. This allows you to just use things like TensorFlow Lite or PyTorch Live in a Frame Processor and run ML Models fully from JS without touching native code! (See proof of concept PR: facebookresearch/playtorch#199 and working PR for Tensorflow: #1633)
  8. Improve performance of Frame Processors by caching the FrameHostObject instance
  9. Improve error handling by using the default JS error handler instead of console.error (mContext.handleException(..))
  10. More access to the Frame in Frame Processors:
    • toByteArray(): Gets the Frame data as a byte array. The type is Uint8Array (TypedArray/ArrayBuffer). Keep in mind that Frame buffers are usually allocated on the GPU, so this comes with a performance cost of a GPU -> CPU copy operation. I've optimized it a bit to run pretty fast :)
    • orientation: The orientation of the Frame. e.g. "portrait"
    • isMirrored: Whether the Frame is mirrored (eg in selfie cams)
    • timestamp: The presentation timestamp of the Frame
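To make points 2 and 10 concrete, here's a rough sketch of how those APIs fit together in a single Frame Processor. This is illustrative only - detectObjects is a hypothetical FP Plugin, and the final V3 signatures may differ slightly:

import { useFrameProcessor, runAtTargetFps, runAsync } from 'react-native-vision-camera'

const frameProcessor = useFrameProcessor((frame) => {
  'worklet'
  // runs synchronously on the Camera thread - `frame` is always the latest data
  console.log(`${frame.orientation} frame (mirrored: ${frame.isMirrored}) at ${frame.timestamp}`)

  runAtTargetFps(5, () => {
    'worklet'
    // throttled to ~5 FPS - good for expensive work you don't need every frame
    const bytes = frame.toByteArray() // GPU -> CPU copy, use sparingly!
    console.log(`Frame is ${bytes.length} bytes`)
  })

  runAsync(frame, () => {
    'worklet'
    // runs on a separate thread - can take longer than one frame interval
    // without blocking the Camera
    const objects = detectObjects(frame) // hypothetical FP Plugin call
    console.log(`Detected ${objects.length} objects`)
  })
}, [])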

Of course we can't just put weeks of effort into this project for free. This is why we are looking for 5-10 partners who are interested in seeing this become reality by funding the development of these features.
On top of seeing this become reality, we will also create a sponsors section for your company logo in the VisionCamera documentation/README, and we will test the new VisionCamera V3 version in your app to ensure its compatibility with your use-case.
If you are interested in that, reach out to me over Twitter: https://twitter.com/mrousavy or email: me@mrousavy.com


Demo

Here's the current proof of concept we built in 3 hours:

// SkSL runtime effect that inverts the Frame's colors
const runtimeEffect = Skia.RuntimeEffect.Make(`
  uniform shader image;
  half4 main(vec2 pos) {
    vec4 color = image.eval(pos);
    return vec4((1.0 - color).rgb, 1.0);
  }
`);

// PoC helper: wraps the runtime effect in a Skia Paint
const paint = paintWithRuntimeEffect(runtimeEffect)

function App() {
  const frameProcessor = useFrameProcessor((frame) => {
    'worklet'
    // draw the shader over the full Frame - runs every frame, on the GPU
    frame.drawPaint(paint)
  }, [])

  return <Camera frameProcessor={frameProcessor} />
}

Progress

So far I have spent around 60 hours improving that proof of concept and creating the demos above. I also refined the iOS part a bit, created some fixes, did some research, and improved the Skia handling.

Here is the current Draft PR: #1345

Here's a TODO list:

  • iOS
    • Set up Skia Preview
    • Pass Skia Canvas to JS Frame Processor
    • Make sure we use high performance Skia drawing operations
    • Make sure Frames are not out of sync with the screen refresh rate (60Hz / 120Hz)
    • Do some performance profiling and see if we can improve something
    • Make sure everything continues to work when not using Skia
    • Swap the REA Runtime with the custom Worklet Runtime
    • Implement synchronous Frame Processors
    • Implement runAtTargetFps
    • Implement runAsync
    • Implement toByteArray(), orientation, isMirrored and timestamp on Frame
    • Add orientation to Frame
    • Convert it to a TurboModule/Fabric
    • Rewrite to new simple & declarative API
  • Android
    • Set up Skia Preview
    • Pass Skia Canvas to JS Frame Processor
    • Make sure we use high performance Skia drawing operations
    • Make sure Frames are not out of sync with the screen refresh rate (60Hz / 120Hz)
    • Do some performance profiling and see if we can improve something
    • Make sure everything continues to work when not using Skia
    • Swap the REA Runtime with the custom Worklet Runtime
    • Implement synchronous Frame Processors
    • Implement runAtTargetFps
    • Implement runAsync
    • Implement toByteArray(), orientation, isMirrored and timestamp on Frame
    • Add orientation to Frame
    • Convert it to a TurboModule/Fabric
    • Rewrite from CameraX to Camera2
    • Rewrite to new simple & declarative API
  • Documentation
    • Create documentation for write-back/Skia Frame Processors
    • Create documentation for synchronous Frame Processors
    • Create documentation for runAtTargetFps
    • Create documentation for runAsync
    • Create a realtime face blurring example
    • Create a realtime license plate blurring example
    • Create a realtime text recognition/overlay example
    • Create a realtime color grading/beauty filter example
    • Create a realtime face outline/landmarks detector example

I reckon this will be around 500 hours of effort in total.

Update 15.2.2023: I just started working on this here: feat: ✨ V3 ✨ #1466. No one is paying me for this, so I am doing it all in my free time. I decided to just ignore issues/backlash so that I can work as productively as I can. If someone is complaining, they should either offer a fix (PR) or pay me. If I listened to every issue, the library would never get better :)

Write-back Frame Processors will shape the future of realtime image processing on mobile. For reference, let's compare this example built natively for iOS/Android vs. built with RN VisionCamera:

Imagine how you'd do that in a native app:

  • ~700 lines of code across ~5 files to set up the Camera accordingly
  • ~300 lines of very low-level C-style Metal code across ~3 files to set up Metal
  • ~40 lines of Metal Shader code to draw the Frame from a Texture (with the box) to the Layer
  • ~30 lines of code to set up the Face Detector Module
  • Then do the same thing again for Android.

In VisionCamera, it's just:

  • ~4 lines of code to set up the <Camera>
  • ~35 lines of code to set up the Face Detector Frame Processor Plugin
  • ~13 lines of code to set up the Frame Processor that detects faces and draws on the view

Plus, it already works on both platforms, you have way more flexibility (third-party FP Plugins on npm) and can draw more (the entire Skia API is available for drawing), and you write plain JS (Command + S to instantly see your changes appear on your device, no need to rebuild). A sketch of such a Frame Processor follows below.
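For a sense of scale, those ~13 lines might look roughly like this. This is a hedged sketch - scanFaces stands in for whatever Face Detector FP Plugin you use (its name and return shape are assumptions), and the drawing calls use the V3 Skia draft API:

const paint = Skia.Paint()
paint.setStyle(PaintStyle.Stroke)
paint.setStrokeWidth(4)
paint.setColor(Skia.Color('red'))

const frameProcessor = useFrameProcessor((frame) => {
  'worklet'
  const faces = scanFaces(frame) // hypothetical Face Detector FP Plugin
  for (const face of faces) {
    // draw a box around each detected face, directly onto the Preview
    frame.drawRect(Skia.XYWHRect(face.x, face.y, face.width, face.height), paint)
  }
}, [paint])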

if VisionCamera becomes Fabric-only, it will reduce the number of potential users drastically, say 10x

Making it Fabric only will make a lot of things way simpler for me, especially talking about the C++/native code buildscripts. For legacy architecture support it requires a bunch of code, making it harder for me to maintain VisionCamera - and VisionCamera is a huge repository that I maintain myself in my free time.

Also, all older versions of VisionCamera still work on the legacy architecture. :)

EDIT: VisionCamera V3 will support the oldarch (Paper) too, so not only Fabric! It will require RN 0.71+ tho :)

Sure, I totally understand that it's quite a burden to support both archs - thanks a lot for such an amazing library!
I just wanted to mention that it's already obvious that migrating real-world existing production apps to Fabric is a question of the pretty far future - no intention to put any request or pressure on you. 🐱

I just wanted to mention that it's already obvious that migrating real-world existing production apps to Fabric is a question of the pretty far future.

I disagree, a lot of our clients at Margelo are running Fabric in production. Facebook is too. We can help with the migration of your app if you want -> hello@margelo.io ;)

@mrousavy I really appreciate the effort you and the team at margelo.io are putting into this package. i think it would be great to also add barcode/qrcode scanning by default, because some of the community packages that have this implementation do not maintain them any more for example this.

Thanks

Hi
I guess many projects can't migrate to Fabric because they depend on libraries that aren't Fabric compatible yet

Here's a table of those libs
reactwg/react-native-new-architecture#6

i think it would be great to also add barcode/qrcode scanning by default

I have received hundreds of requests like this - seems like a much-wanted feature haha... The way VisionCamera is built, it definitely has to be a Frame Processor Plugin. But how about this: if we manage to get enough partners/companies to sponsor the VisionCamera V3 project, I'll build a QR code scanner plugin into VisionCamera, or as a separate package maintained by me :)

Very nice of you to maintain this library, i am a big fan but from using this library i have some improvements that should also be looked at:

  • Flash (when using the takeSnapshot method, I think the flash fires too early and the picture gets ruined)
  • Zoom (it would be way easier to take a look at how expo-camera did this, by just adding a simple true/false option)
  • Performance (the overall performance on Android is pretty bad; the expo-camera package is about 0.5 seconds faster)
  • Documentation (the documentation is not really good; to understand or even find many functions, you have to dig deep into the examples)

To be clear, this is absolutely no hate - I am a big fan of this package and would love to use it in my apps, but for now I'll stay with expo-camera, because it is faster and the flash works properly. I'd love to switch though, because expo-camera doesn't have many features and has its own problems, like mirrored images on the front camera :)

Hi @jonxssc,

When using the takeSnapshot method, I think the flash fires too early and the picture gets ruined

takeSnapshot does not have a flash. It is literally just taking a snapshot of the view.

It would be way easier to take a look at how expo-camera did this, by just adding a simple true/false option

What? How does zoom={true} or zoom={false} make sense?

The overall performance on Android is pretty bad; the expo-camera package is about 0.5 seconds faster

The performance is pretty optimized - this is what VisionCamera is all about. On iOS it's really fast, optimized especially for streaming camera frames and initial launch time.
What exactly is 0.5 seconds faster in expo-camera? Open time? Photo capture time? Can you create a repo to reproduce this?
Remember that it's hard to compare camera speeds if you're using different formats (e.g. if you use a higher-resolution camera device/format, it will obviously take longer to open the Camera).
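For example, to compare apples to apples you'd pin both libraries to a comparable format first - with the v2 API, that could look roughly like this (a sketch; the filtering criteria are up to you):

const devices = useCameraDevices()
const device = devices.back

// pick the highest-resolution format that stays at or below 1080p video,
// so we're not comparing a 4k pipeline against a 1080p one
const format = device?.formats
  .filter((f) => f.videoHeight <= 1080)
  .sort((a, b) => b.videoHeight - a.videoHeight)[0]

// then: <Camera device={device} format={format} ... />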

The documentation is not really good; to understand or even find many functions, you have to dig deep into the examples

What functions do you mean specifically?

I agree for the QR code scanner

Isn't RN Skia still a proof of concept?

Thank you for working on this library! It saved me a ton of work :)
Looking forward to V3!

Thank you! Happy to hear it helps :)

Thank you for working on this library! ... Native QR code scanner is most welcome ...

Many open issues against this project haven't been addressed, and I've had to fork because the maintainers have been unresponsive / haven't followed up. It's hard to get excited about V3 when there are still outstanding issues with v2 that the V3 plan above doesn't address. I think addressing clear sticking points (even if it's a 'no, we aren't interested in including this in the project right now'), or at least moving issues that are marked 'question' to Discussions to clean things up, would allow potential contributors like myself to understand whether continuing to maintain our own forks is the only reasonable option, or if there is enough alignment between the main project and our own goals/expectations to make the forks redundant.

@jpike88 I agree, issues should be handled individually and flagged appropriately - either a "wont-fix" (not on roadmap), or put into backlog and restructured.
Spam issues should be closed, and duplicate ones should be linked / locked.

This is unfortunately really time-consuming, and I already have very limited time. Cameras are a pretty complex topic, and I know my way around the VisionCamera codebase since I wrote it, so I think my time is best spent actually writing code - so that's what I do: when I have some free time, I work on the code and improve VisionCamera. E.g. today I already spent 5 hours working on V3 (see #1466).

Frankly, most issues are spam or annoying, and I don't have the nerves to process each of them. My library works in all the apps I use it in, and if something breaks for me, I fix it. Luckily for everybody else, I publish that code as open source on GitHub (here) - and if something doesn't work for you, submit a well-written PR to fix it and I'll find some time to test it and integrate it if nothing breaks.

If someone wants to help me maintain VisionCamera, I think the best place to start is by triaging/flagging issues, answering questions, closing spam or duplicate issues, and discussing features/fixes with me internally. If anyone wants to do that, let me know :)

I am willing to help you guys use VisionCamera, but I can only do so much. If someone posts a screenshot of a build error and the screenshot just says "Build failed in 65 seconds", I consider it rude to not even try to fill out the issue appropriately, and I won't reply myself.

Many open issues against this project haven't been addressed

Which ones are those exactly? Again, I use VisionCamera in production in a few apps, and it successfully builds, starts, shows Camera at 30 or 60 FPS, takes photos, records videos, and runs a Frame Processor (private code). Both on iOS and Android.

I agree with Marc, developers are becoming more and more demanding. It is an open source project, fork it if you don't like how things are handled here.

Lately I had another problem and had no time to fix it myself, so I hired a freelancer to help me. No big deal. I will PR soon.

@mrousavy As a side note, I think it would be better to create an open collective project so corporate users, like myself in certain cases, will be able to support the project financially.

I will PR soon.

❤️❤️❤️

@mrousavy As a side note, I think it would be better to create an open collective project so corporate users, like myself in certain cases, will be able to support the project financially.

Oh yea, interesting. I thought GitHub sponsors was more convenient and modern, but I'll take a look at open collective!

I thought GitHub sponsors was more convenient and modern, but I'll take a look at open collective!

My thinking is that GitHub Sponsors is more tailored to P2P support, while Open Collective is more corporate-level. I guess Open Collective is more demanding in terms of legal requirements, because certain projects can be considered charity foundations and are tax-exempt.

I agree with @mrousavy that VisionCamera works fine in most production apps. Yeah, you might need patch-package, some dependency version locking, or other not-so-fancy workarounds, but you got the camera library for free.

Most "issues" related to build fails or whatnot are not issues, those are counterparts of rookie C programmer hit by a SIGSEGV and trying to blame on the compilers/IDEs.

(By the way, VisionCamera is the best library my team has encountered in the RN world. My team couldn't have made a camera library that's both very extensible and feature-rich. Some people these days really need to learn to appreciate what they have, instead of demanding to be babysat all day long.)

Much appreciated @zzz08900 ❤️❤️❤️❤️❤️❤️❤️

Yeah, you might need patch-package or some dependency version locking

Working on V3 right now, and it seems that this solves most of the build issues! :)

Well, I've noticed most of those "issues" with build errors occur after upgrading the RN version, so I would say many of them are not caused by VisionCamera or Reanimated, but originate from a bad RN upgrade.

I've had those problems before - like library X stops working after an RN upgrade - so now I just ditch the whole android/ios folders, bump dependencies, and re-configure the new RN version from the base template. That always works.

I guess the above method should be in the official RN documentation.

@mrousavy Thank you for this great project! It's hard waiting for the new RNVC.
I read the commits in the v3 branch, and there seem to be more updates than items checked in the to-do list here.
Could you please update the to-do list? I'd like to test the new RNVC features that are available now.

Yeah, I figured out a few more features that would be cool to add in v3 - I'll update the todo list, maybe today.

I just updated the task list!

Added the two points:

  • synchronous Frame Processors (which I just merged into feat: ✨ V3 ✨ #1466, woohoo!!! 🥳)
  • the PyTorch Live integration which comes with the REA -> RN Worklets rewrite

I just released a pre-release for v3 here: v3.0.0-rc.1
You can test the latest V3 release by creating a new RN project with RN 0.71 and installing VisionCamera + RNWorklets:

yarn add react-native-vision-camera@3.0.0-rc.1
yarn add react-native-worklets@https://github.com/chrfalch/react-native-worklets#15d52dd

Things to test:

  • New Android build script. This should drastically speed up the build time! 💨
  • New Worklet library. This replaces Reanimated Worklets. Should be faster and more stable :)
  • New synchronous Frame Processors. Should be faster :)
  • runAtTargetFps and runAsync in Frame Processors
  • Using HostObjects or HostFunctions (like models from PyTorch) inside a Frame Processor. This will probably require a few native bindings on PyTorch's end to make the integration work (cc @raedle)

Overall V3 is close to completion. I have a few things to do the coming days so not sure how much work I can put into this. If anyone wants to support the development of v3, I'd appreciate donations / sponsors: https://github.com/sponsors/mrousavy ❤️ :)
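If you want a quick smoke test, a minimal app like this should be enough to verify the RC builds and that Frame Processors run (assuming the v2-style hooks are unchanged in this RC):

import * as React from 'react'
import { StyleSheet } from 'react-native'
import { Camera, useCameraDevices, useFrameProcessor } from 'react-native-vision-camera'

export function App() {
  const devices = useCameraDevices()
  const device = devices.back

  const frameProcessor = useFrameProcessor((frame) => {
    'worklet'
    console.log(`New frame: ${frame.width}x${frame.height}`)
  }, [])

  // make sure Camera permission was granted first (Camera.requestCameraPermission())
  if (device == null) return null
  return (
    <Camera
      style={StyleSheet.absoluteFill}
      device={device}
      isActive={true}
      frameProcessor={frameProcessor}
    />
  )
}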

I tried creating a new RN project with RN 0.71 and adding the dependencies, but it doesn't build - see the attached build error screenshot.

The Example folder works fine.

I had those problems before, like library X stops working after RN upgrade, so now I just ditch the whole android/iOS folders, bump dependencies and re-configure new RN version from base template - that always work.

@zzz08900 I agree, that's a good approach to upgrading. I always do that. :)

@migueldaipre please don't post issues here, only high-level feedback. Thanks. I'll look into this later though

I guess the above method should be in the official RN documentation.

No need to - they have an upgrade tool where you can see all the changes you need to make:

https://react-native-community.github.io/upgrade-helper/

I've got some questions and ideas about the new Frame Processors. They're a bit long, so I made a new discussion here:
https://github.com/mrousavy/react-native-vision-camera/discussions/1481

bctt commented

I just released a pre-release for v3 here: v3.0.0-rc.1
You can test the latest V3 release by creating a new RN project with RN 0.71 and installing VisionCamera + RNWorklets:

Will V3 require RN 0.71?

Will V3 require RN 0.71?

Yes. Due to the significant simplification of the buildscript, I fixed a bunch of build errors and made the whole build process much more robust.
I had to drop backwards compatibility for this, otherwise it would've gotten too complicated.

However I managed to not rely on the new arch, so it works on both old and new arch! (Paper and Fabric)

Another surprise for v3 - I added three new props and one new function to the Frame object:

  • toByteArray(): Gets the Frame data as a byte array. The type is Uint8Array (TypedArray/ArrayBuffer). Keep in mind that Frame buffers are usually allocated on the GPU, so this comes with a performance cost of a GPU -> CPU copy operation. I've optimized it a bit to run pretty fast :)
  • orientation: The orientation of the Frame. e.g. "portrait"
  • isMirrored: Whether the Frame is mirrored (eg in selfie cams)
  • timestamp: The presentation timestamp of the Frame

see #1487
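In a Frame Processor, usage would look something like this (a sketch based on the list above; exact types may differ):

const frameProcessor = useFrameProcessor((frame) => {
  'worklet'
  console.log(frame.orientation) // e.g. "portrait"
  console.log(frame.isMirrored)  // true for selfie cameras
  console.log(frame.timestamp)   // presentation timestamp

  // GPU -> CPU copy - avoid calling this every frame unless you need the pixels
  const bytes = frame.toByteArray()
  console.log(`Got ${bytes.length} bytes`)
}, [])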

AlkanV commented

hello @mrousavy,
i really like what you are doing here and i am a big fan of the project.
a couple of months ago, just before you released the roadmap for v3, i gave it a go (over rn-camera) to see if we could integrate it into our project. since this library produced big media files on android, we parked the integration. this was due to the cameraX integration (instead of camera2), where we were not able to 'really' modify bitrates - we could only pass some preset props, which did not work as we expected. i got a notification that there is an RC version of v3, and i immediately checked whether you had rewritten the camera2 module - unfortunately not :(. do you think the above implementation, #1487, would fix this problem on android?

best wishes,
a.

i really like what you are doing here and i am a big fan of the project.

Thank you! :)

do you think the above implementation #1487 , would fix this problem on android?

No, this has nothing to do with media writing. This exposes new props to the Frame Processor.

i got a notification that there is an RC version of v3, and i immediately checked whether you had rewritten the camera2 module - unfortunately not :(

Read the things I have planned for V3 at the top of this issue:

  1. ...
  2. Rewrite the native Android part from CameraX to Camera2
  3. ...

I have it planned, but I think this will be a big project. This will take like 100 to 300 hours worth of effort, and frankly I don't have that time right now unless I decide to become a cocaine addict.

So I'm thinking of releasing V3 as a kind of beta and just having some features not work on Android yet - and if people get together to raise some money for getting those features implemented on Android as well, I can more freely allocate that time through my agency (Margelo) to work on this.

i think it would be great to also add barcode/qrcode scanning by default

I have received hundreds of requests like this - seems like a much-wanted feature haha... The way VisionCamera is built, it definitely has to be a Frame Processor Plugin. But how about this: if we manage to get enough partners/companies to sponsor the VisionCamera V3 project, I'll build a QR code scanner plugin into VisionCamera, or as a separate package maintained by me :)

Or offer it as a paid tier for enterprise usage. OSS for personal usage. The ones I work for don't understand open source, they figure it's just free. They understand license fees though.

We use this for our Cordova apps, works like a charm https://github.com/phonegap/phonegap-plugin-barcodescanner

Vednus commented

Just an FYI: you need to pin Skia at 0.1.175 for rc2 to work. I was getting the error 'include/gpu/GrRecordingContext.h' file not found when trying to build in Xcode with 0.1.176.

I suppose VisionCamera V3 records the video just as shown on the screen.
This means a write-back frame processor also affects the video recording.

Sometimes those need to differ - the preview on the screen and the video being recorded.
For example, I may want to watermark the recorded video, but not show the watermark on the screen.
If the video processing has to be done on the original video after the user finishes recording, it would be quite inconvenient for the user, who would need to wait until the saving is done.
It's OK for photos - we'll be able to manipulate a photo before saving it thanks to the new API of V3, and photo processing is cheap enough not to bother the user.
So I think it would be better to have a way to explicitly pass back the frame to be saved to the video.

Is this achievable? There might be API design issues, and we'd need time to think about them. Before that, I'd like to know if it is possible with the current internals of VisionCamera V3.

Hi @bglgwyng, interesting point. Let me quickly explain:

Cameras work by combining a set of inputs into a set of outputs.

For example, those are the inputs that we have:

  • Video Input (a Camera device)
  • Audio Input (a Microphone)

And those are the outputs:

  • Preview View
  • Photo Capture
  • Record Video
  • Frame Processor

This is too much for a Camera to handle at the same time, so we do some tricks here.

On Android, we can skip the Photo Capture and do a snapshot/screenshot of the view instead, so that we only have three outputs (Preview, Video, Frame Processor, but no Photo Capture).
On iOS, I combine Video Recording and Frame Processing into one, so that I get one Frame, which runs the Frame Processing algorithm, and then records it to a file if there is an active recording.

In other words; I don't have two pipelines for streaming frames here. I might be able to restructure this a bit, especially with the Skia Preview (maybe we can combine FP + Preview instead?)

Anyways - right now, in the current V3 version, you can only draw to the Preview View, not to a recorded video or a photo. That still has to be done in post-processing.
I want to implement that, but no one is paying me to do so, so it'll wait until I have the free time. If you want to accelerate this, consider sponsoring me on GitHub, or reach out to me for a more corporate deal.

@mrousavy When will VisionCamera V3 launch?

I don't have a timeline for this. I work on it in my free time, so I'm not setting any deadlines - I work on it whenever I want to and whenever I have some free time. If you want to accelerate this, consider sponsoring me on GitHub.

With that being said - idk, could be 1 month, could be 3 months. There's still a fair bit of work to do; we're talking about ~500 hours of effort here just for the V3 version.

@mrousavy What are the 35 lines of code to set up the Face Detector Frame Processor Plugin? Will it be prebuilt into the camera?

Will it be prebuilt into the camera?

No - VisionCamera's focus is to be as lean as possible. There's no opinionated frame processor code built in, so users can tweak it however they want.

Prebuilding stuff into the Camera just makes it bloated and messy; that's why I keep it modular/plugin-based.

hello,

thank you for the effort and time invested in this lib.
i just tested rc2 on android - the CameraPackage is coming from the example package =>
// react-native-vision-camera import com.mrousavy.camera.example.CameraPackage;
i think there is a small typo in the generation scripts

Hi - yep, I fixed this in one of the latest commits, but I haven't published a new version yet. I'll work on Android first, then publish a new prerelease.

@mrousavy Will VisionCamera V3 reduce its size? Its 500MB increases React Native app size by a lot. Besides that, it's great!

What? What's 500MB?

is this still valid? because I cannot resolve react-native-worklets from that source

@mrousavy 500MB - the size of the package is a lot; it increases the app size by a lot. If you do npm install -g cost-of-modules and then run cost-of-modules in the directory you are working in, the size is 500MB.

@syblus446 you're looking at the build cache. The package size is not 500MB.

On npmjs.org you can see that the size of the react-native-vision-camera package is 747kB, and the actual JS bundle part of that is even smaller.

Is there some trick to get tap-to-focus working? I have been integrating v3 into an app as a replacement for an older camera library, but on my iPhone 13 I cannot get tap-to-focus working:

<TapGestureHandler
  numberOfTaps={1}
  onEnded={async ({ nativeEvent }) => {
    if (device?.supportsFocus) {
      await camera.current!.focus({ x: nativeEvent.x, y: nativeEvent.y });
    }
  }}>
  <Camera ... />
</TapGestureHandler>

device.focusMode = .continuousAutoFocus
if device.isExposurePointOfInterestSupported {
  device.exposurePointOfInterest = normalizedPoint
  device.exposureMode = .continuousAutoExposure
}

Fix: #1541

Another issue: this new method does not match what videoPreviewLayer.captureDevicePointConverted(fromLayerPoint: point) would generate. This means a tap-to-focus event will not be sent to the same spot where the touch event happened:

/// Converts a Point in the UI View Layer to a Point in the Camera Frame coordinate system
func convertLayerPointToFramePoint(layerPoint point: CGPoint) -> CGPoint {

@mrousavy It's possible that this package has a large number of dependencies or includes large files that contribute to its size. You can create a new React Native project, install react-native-vision-camera, then do npm install -g cost-of-modules and run cost-of-modules - it's too large.

It also increases the app bundle size in production builds.

@levipro yeah, I left out tap-to-focus on Android for now as well. On iOS the code should work though - it did work in previous versions.. 🤔 Does videoPreviewLayer.captureDevicePointConverted(fromLayerPoint: point) not do what it did previously?

@syblus446 again, this is off-topic. Create a separate issue for this. This is the V3 discussion.

It's possible that this package has a large number of dependencies

It has zero runtime dependencies. It has peerDependencies on react and react-native, which are packages you always have installed if you use VisionCamera.
Maybe you're looking at devDependencies:

"devDependencies": {
"@expo/config-plugins": "^4.0.0",
"@jamesacarr/eslint-formatter-github-actions": "^0.1.0",
"@react-native-community/eslint-config": "^3.0.1",
"@react-native-community/eslint-plugin": "^1.1.0",
"@release-it/conventional-changelog": "^3.3.0",
"@types/react": "^17.0.21",
"@types/react-native": "^0.65.5",
"eslint": "^7.32.0",
"pod-install": "^0.1.27",
"prettier": "^2.4.1",
"react": "^17.0.2",
"react-native": "^0.66.0",
"react-native-builder-bob": "^0.18.1",
"react-native-reanimated": "^2.3.0-beta.2",
"release-it": "^14.11.5",
"typescript": "^4.4.3"
},

It also increases app bundle size when it comes to doing production

By how much? Do you have a before and after comparison?

@levipro yea I left out tap to focus on Android now as well. On iOS, the code should work though, it did work in previous versions.. 🤔 Does videoPreviewLayer.captureDevicePointConverted(fromLayerPoint: point) not do what it did previously?

The old version (2.15.4) definitely did not work - the continuous autofocus overrides everything and the tap does nothing. The secondary problem introduced in v3 (I assume to accommodate the Skia preview option) is that a custom captureDevicePointConverted method was added to replace calling ...videoPreviewLayer.captureDevicePointConverted(fromLayerPoint: point). This is a problem because it calculates an entirely different coordinate. So when the videoPreviewLayer is actually available, the built-in method should be used. I have made that change as well - now tap-to-focus works properly and the coordinate system for that event works again.

@mrousavy You can see its size in the attached screenshot - any solution to fix it?

I wrote a custom frame processor in object-oriented style on iOS (622d383).
However, FrameProcessorPlugins is an empty object, and it seems my frame processor is not detected.
Here is the repo; it contains the ExamplePlugin from the RNVC3 example.
Are there additional steps needed to register a frame processor? I just created the same files that the RNVC3 example has, and this approach worked in RNVC V2.

Also, the latest version on the v3 branch throws a build error on iOS:

/.../node_modules/react-native-vision-camera/ios/CameraError.swift:264:21: error: cannot find type 'SystemError' in scope
  case system(_ id: SystemError)

It can easily be fixed by commenting out that line. I made a patch script, though I'm not sure this is the right way.
I wonder if others are seeing the same error. I'm using Xcode 14.2.

@bglgwyng ah yep, this requires one additional step now - you need to manually register the plugin, just like on Android. This can be done either with a + load method, or in your AppDelegate's start func. So call [ExamplePlugin registerPlugin] exactly here.

Also - yep the SystemError not found thing is already fixed but not yet released, thx :)

@mrousavy Hello Marc, VisionCamera is a great package, but I also use your package react-native-jsi-image, and unfortunately it's not working with the latest version of React Native. Can you please give me a solution? I tried creating an issue on the repo, but I can't find any solution to the problem: CMake Error at CMakeLists.txt:6 (add_library): Cannot find source file: ../cpp/TypedArray/TypedArray.cpp

I asked ChatGPT, and none of its suggested solutions work - can you please help me?

@bglgwyng did registering the FP in AppDelegate work for you?

Yes, it worked! I'm sorry I didn't let you know. I just thought a reaction was enough.

All good, glad to hear it works!
Currently don't have a lot of free time but I'll continue the work on V3 soon thanks to some new sponsorships I got!

you're a wizard harry - thanks for all your generosity!

ratz6 commented

Any ETA regarding V3?

I have recently been working on updating https://github.com/rodgomesc/vision-camera-code-scanner/ to support VisionCamera V3. @bglgwyng and I came across an interesting compile error when trying to build projects with use_frameworks! :linkage => :static. It is the same error referenced here: #1043.

I found I could work around this error by patching in an empty VisionCamera.h file and referencing it in the podspec. Some auto-generated code during the Xcode build process apparently expects this header to exist. I'm not sure if this is the right way to go about solving the problem, but it does fix the compile error. Perhaps the community can comment on whether it is reasonable or if there is a preferable alternative? If there are no objections, I could submit a pull request with the fix.

Interesting, shouldn't this header be generated by CocoaPods? If nothing breaks, I'm happy to use this header.

I made a simple patch script that makes react-native-skia@^0.1.176 work with VisionCamera V3. It's just a revert of the podspec diff between react-native-skia@0.1.175 (the last version that works with VisionCamera V3) and react-native-skia@0.1.185. You can use the latest version of react-native-skia with VisionCamera V3 by applying this patch.
Since it's a revert of a library change, I'm very suspicious of this being a proper solution. Does anyone have a better one?

Good idea @bglgwyng! I'm talking with @chrfalch to figure this one out and make sure the newest version stays compatible, it's probably gonna be a change on my end, not on Skia's end.

Another exciting update: I'm flying @thomas-coldwell in to our Vienna office next week and we'll do a week-long hackathon on VisionCamera V3. We'll attempt to rewrite the Android part to Camera2, try the RN Skia preview implementation there, and ideally fix a few bugs (like the build error with Skia) and create a few examples. I'll keep you guys updated. 🚀

I have discovered that it is not possible to directly access Reanimated's shared values from the frame processor. As an example, the code snippet below:

const rotationSensor = useAnimatedSensor(SensorType.ROTATION);

const frameProcessor = useFrameProcessor((frame) => {
  'worklet';
  console.info(rotationSensor.sensor.value);
}, []);

results in the following error message:

Error: Reading from `_value` directly is only possible on the UI runtime, js engine: hermes

It appears this error occurs because the frame processor is not considered a Reanimated worklet.

I am left wondering if there is currently, or will be in the future, a way to read and write shared values across different types of worklets.

@mrousavy When will the Face Detector Module launch with react-native-vision-camera?

I am left wondering if there is currently, or will be in the future, a way to read and write shared values across different types of worklets.

@bglgwyng so this is because I am using react-native-worklets instead of Reanimated.
Since Reanimated 3, Reanimated has similar benefits to react-native-worklets; however, I cannot use Reanimated because they don't provide the APIs that I need in VisionCamera:

  • Creating a new Worklet context from C++
  • Creating a new Worklet context from JS
  • Creating a Worklet on a specific Worklet Context from C++
  • Creating a Worklet on a specific Worklet Context from JS
  • Calling said Worklet from C++
  • Calling said Worklet from JS
  • Build stability/no breaking changes with new REA versions

I talked with @tomekzaw about this, but at the moment I don't have the time to implement this for Reanimated. If the Software Mansion team implements those APIs, I can easily switch to REA :)
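For context, this is roughly the API surface I mean - a JS sketch using react-native-worklets, where the exact names (Worklets.createContext, context.runAsync) are assumptions on my part:

import { Worklets } from 'react-native-worklets'

// create a dedicated Worklet context (VisionCamera needs one for its Frame Processor thread)
const context = Worklets.createContext('VisionCamera.async')

// create a worklet and call it on that specific context
const processInBackground = (value) => {
  'worklet'
  console.log(`processing ${value} on a background context`)
}

context.runAsync(() => {
  'worklet'
  processInBackground(42)
})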

@thomas-coldwell and I just finished building a really cool demo.

This shows how groundbreaking VisionCamera V3 really is for the entire mobile camera industry - instead of building two highly complex native apps and integrating shaders, canvases, TextureViews, TFLite models, etc. into your app, you can simply use VisionCamera's easy-to-use JavaScript APIs.

For running the model, we have a Tensorflow Lite abstraction. The model can easily be tweaked and swapped out at runtime using hot-reload.

For drawing the blur and the hands, we use @shopify/react-native-skia. That's all C++/GPU based operations as well.

For running the Frame Processor we use the native Camera APIs which are fast and synchronous.

The entire abstraction is 1ms slower than a fully native app, which is nothing in my opinion.

Here's the tweet

(Demo video: demo.mov)

The code:
const blurEffect = Skia.RuntimeEffect.Make(FACE_PIXELATED_SHADER);
if (blurEffect == null) throw new Error('Shader failed to compile!');
const blurShaderBuilder = Skia.RuntimeShaderBuilder(blurEffect);
const blurPaint = Skia.Paint();

const linePaint = Skia.Paint();
linePaint.setStrokeWidth(10);
linePaint.setStyle(PaintStyle.Stroke);
linePaint.setColor(Skia.Color('lightgreen'));
const dotPaint = Skia.Paint();
dotPaint.setStyle(PaintStyle.Fill);
dotPaint.setColor(Skia.Color('red'));

// load the two Tensorflow Lite models - those can be swapped out at runtime and hot-reloaded - just like images :)
const faceDetection = loadModel(require("../assets/face_detection.tflite"))
const handDetection = loadModel(require("../assets/hand_detection.tflite"))

const frameProcessor = useFrameProcessor((frame) => {
  'worklet'

  // runs native TensorFlow Lite model on GPU - fast!
  const { faces } = faceDetection.run(frame)

  // blur faces using Skia
  for (const face of faces) {
    const centerX = (face.x + face.width / 2);
    const centerY = (face.y + face.height / 2);
    const radius = Math.max(face.width, face.height) / 2;

    blurShaderBuilder.setUniform('x', [centerX]);
    blurShaderBuilder.setUniform('y', [centerY]);
    blurShaderBuilder.setUniform('r', [radius]);
    const imageFilter = Skia.ImageFilter.MakeRuntimeShader(blurShaderBuilder, null, null);
    blurPaint.setImageFilter(imageFilter);

    frame.render(blurPaint);
  }

  // runs native TensorFlow Lite model on GPU - fast!
  const { hands } = handDetection.run(frame)

  // show hand outlines using Skia
  for (const hand of hands) {
    // green lines:
    for (const line of hand.lines) {
      frame.drawLine(
        line.from.x,
        line.from.y,
        line.to.x,
        line.to.y,
        linePaint
      )
    }
    // red dots:
    for (const dot of hand.dots) {
      frame.drawCircle(
        dot.x,
        dot.y,
        dot.size,
        dotPaint
      )
    }
  }
}, [linePaint, dotPaint])

return <Camera frameProcessor={frameProcessor} />

nice work @mrousavy - when will v3 be available? or at least an rc-3?

@mrousavy Where can we get face_detection.tflite? Is there a GitHub repo for this demo code?

It might be Blazeface. The pre-trained model file can be obtained at https://github.com/ibaiGorordo/BlazeFace-TFLite-Inference/tree/main/models.

However, as far as I know, since the ML Kit face detector also uses the same model, I'm not sure you'd see a difference in accuracy or performance compared to vision-camera-face-detector. Also, you need to write the post-processing of the inference yourself, such as filtering the bounding boxes - this is where improvements can be made. It is also worth mentioning that ML Kit's face tracking is not very stable.

@bglgwyng vision-camera-face-detector is good, but it fails my release build on Android every time I try. Will this work with React Native? The repo you linked is written in Python.

@xts-bit I don't know about the Android build issue; perhaps it'd be better discussed in a separate issue. You can find the .tflite file in the link I provided.

Sorry for the misinformation - I just found the branch for that cool demo: #1586.

@bglgwyng I checked this branch; there are other face detector files, but I can't find the face_detection.tflite file used in the code example.

@mrousavy When will VisionCamera v3 launch? Will it support iOS simulators? When will the face detector modules launch?

Thanks @bglgwyng - yep, it uses BlazeFace, but for now the FP Plugin was built with MediaPipe. I wanted to experiment more with a general-purpose TFLite FP Plugin, but I've focused on Android the past few days.

@mrousavy where did you get face_detection.tflite? can i get this file?

Would it be possible to plug in your own text recognition model and have it detect text using that?

@mrousavy You are doing amazing work - hats off to you. We are currently using VisionCamera in a production app, but when we record video on the front camera it flips the video in the preview (iOS works fine). We have fixed it ourselves, but processing the flipped video takes a lot of time. Looking forward to built-in support for this.

Have you considered the possibility of passing a Skia surface to the frame processor alongside the existing ImageProxy on Android and CMSampleBufferRef on iOS? This would enable the utilization of Skia for image processing while avoiding unnecessary memory allocation and copying.

To provide some context, I encountered a situation where I needed to perform preprocessing on an image for ML inference. Due to the limited image processing functions available in Android (specifically, cropping an image by a rotated rectangle), I resorted to using OpenGL. This involved creating an OpenGL texture from the video frame by obtaining the buffer and converting the color space from YUV to RGB. However, I realized that the vision camera already performs a similar operation. To avoid redundancy and optimize performance, I began exploring the possibility of using the same buffer for preview purposes.

Given my limited knowledge of Android and iOS image objects, I wanted to confirm my understanding of the current situation before suggesting any changes. Currently, I use the Bitmap object on Android and CIImage on iOS for image processing, both obtained from the frame object passed to the frame processor. I would like to know if using these objects results in additional memory allocation and copying, or if they simply act as wrappers for the original buffer without allocating extra memory. If they are just wrappers, then my current approach doesn't affect performance, despite its suboptimal nature of duplicating boilerplate code. However, if extra memory allocation or copying occurs, many frame processors would make it happen unnecessarily.

I referred to it as a Skia surface earlier in my comment, but I'm uncertain if it's the correct object for the described usage. Does the current implementation of the V3 camera have an object that can be utilized for this purpose?

Hey @bglgwyng

Have you considered the possibility of passing a Skia surface to the frame processor alongside the existing ImageProxy on Android and CMSampleBufferRef on iOS? This would enable the utilization of Skia for image processing while avoiding unnecessary memory allocation and copying.

not sure if I understand correctly, but the purpose of V3 is to allow you to draw on a Skia context. The Skia canvas/surface/context is passed to the Frame Processor, and you can draw on it straight from JS.

On the native side (in a FP Plugin) you receive the Frame. The Frame object will hold the native buffer.
You can directly use that buffer without making any copies. Afaik CIImage does not copy; I'm not sure about Bitmap.

Converting from YUV to RGB does copy, but often you need to do that. On iOS, the Frame Processor Frames are already in RGB when using Skia, but YUV when not using Skia. I think I should also add a prop to the Camera to configure which colorspace to use, RGB or YUV.
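From the JS side, such a prop could look like this - purely a sketch, since the prop (here called pixelFormat) doesn't exist yet:

// hypothetical prop: choose the Frame colorspace up front,
// trading conversion cost against what your FP Plugin expects
<Camera
  device={device}
  isActive={true}
  pixelFormat="rgb" // or "yuv"
  frameProcessor={frameProcessor}
/>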

@mrousavy Thank you for your response! I apologize for any confusion caused by my previous question. Allow me to clarify my query.

I did not intend to suggest drawing on the Skia surface within the native FP plugin. I understand that it may not have any immediate practical value. Instead, I was considering utilizing the Skia texture that contains the original frame to preprocess the image and obtain the tensor object.

To provide a more detailed overview of my current situation, I have shared a face recognition demo. The current flow of the demo, without incorporating Skia surface, is as follows:

graph TD;
    CMSampleBuffer-->CVImageBuffer-1;
    CVImageBuffer-1-->SkiaTexture;
    SkiaTexture--write-back frame processor-->SkiaSurface-Preview;
    CMSampleBuffer-->CVImageBuffer-2;
    CVImageBuffer-2-->CIImage;
    CIImage--crop face with CoreImage operations-->CGImage;
    CGImage-->Tensor;

Since I am not satisfied with the functionality of CIImage, I contemplated using Skia to preprocess the image and obtain the tensor object. This would result in the following modified flow:

graph TD;
    CMSampleBuffer-->CVImageBuffer-1;
    CVImageBuffer-1-->SkiaTexture-1;
    SkiaTexture-1--write-back frame processor-->SkiaSurface-Preview;
    CMSampleBuffer-->CVImageBuffer-2;
    CVImageBuffer-2-->SkiaTexture-2;
    SkiaTexture-2--crop face with Skia operations-->Tensor;

I have concerns about the potential creation of an additional copy when generating the Skia texture from CVImageBuffer. Although this issue may not arise in iOS, I wonder how it would work when creating the texture from ImageProxy on Android. It's possible that creating a Bitmap object first to create the texture, and converting the ImageProxy to a Bitmap object, may involve additional copies due to the YUV to RGB translation process.

Considering these factors, I arrived at the following alternative approach:

graph TD;
    CMSampleBuffer-->CVImageBuffer;
    CVImageBuffer-->SkiaTexture;
    SkiaTexture--write-back frame processor-->SkiaSurface-Preview;
    SkiaTexture--crop face with Skia operations-->Tensor;

In this scenario, the native FP plugin receives the Skia texture as a parameter, such as frame.skiaTexture, and can utilize it internally.

I would appreciate your insights on the feasibility of this approach. If my understanding of the flow is incorrect, please kindly correct me.

huh, that's interesting. The native skiaTexture would be a void* and you'd have to go into C++, then make sure all the Skia bindings and linking are set up properly, and then you can use the texture - but I guess that could work.

What you could do in the latest Skia version is just call frame.takeSnapshot() - you'd get an SkImage of the currently rendered Canvas. But again, this is a copy.
I'll think about this!

yarn add react-native-vision-camera@rc
yarn add react-native-worklets@https://github.com/chrfalch/react-native-worklets
yarn add @shopify/react-native-skia

hi @mrousavy,
looking to support this awesome library. I have been following it for a while and am wondering: is the development of this dead? I saw a lot happening around Feb-March. If this is alive and kicking, I would love to support it with at least $20 a month.

is the development for this dead?

No, it's not dead - I'm exploring different things on other branches and working on it from time to time. I'm obviously not working on VisionCamera full-time, but it's coming along. The challenge is a bit harder than I originally anticipated; Skia interop is quite a complex task.

On RN 0.71.7 I was trying to use the rc2 version, then suddenly I was faced with this issue:

/Users/yadigarberkayzengin/projects/yabu/RNBugMapper/android/app/build/generated/rncli/src/main/java/com/facebook/react/PackageList.java:83: error: cannot find symbol
import com.mrousavy.camera.example.CameraPackage;
                                  ^
  symbol:   class CameraPackage
  location: package com.mrousavy.camera.example
/Users/yadigarberkayzengin/projects/yabu/RNBugMapper/android/app/build/generated/rncli/src/main/java/com/facebook/react/PackageList.java:169: error: cannot find symbol
      new CameraPackage(),
          ^
  symbol:   class CameraPackage
  location: class PackageList

I'm pretty aware it's in the early development phase; I just wanted to contribute if you find this useful.
I'd like to give a hand too - I wish I had fewer burdens on my shoulders at the moment.

@mrousavy Hi Marc!

I also wanted to try out v3 rc2 in my project, but with no luck. I got the following error:

> Configure project :react-native-vision-camera
react-native-vision-camera: Skia integration is enabled!
5 actionable tasks: 5 up-to-date

FAILURE: Build failed with an exception.

* Where:
Build file 'D:\Work\RNEngravingApp\node_modules\react-native-vision-camera\android\build.gradle' line: 150

* What went wrong:
A problem occurred evaluating project ':react-native-vision-camera'.
> Project with path ':react-native-worklets' could not be found in project ':react-native-vision-camera'.

However, it is present in my package.json:

{
  "name": "RNEngravingApp",
  "dependencies": {
    "react": "18.2.0",
    "react-native": "0.71.10",
    "react-native-reanimated": "2.17.0",
    "react-native-worklets": "0.0.1-alpha",
    "react-native-vision-camera": "3.0.0-rc.2",
    "@shopify/react-native-skia": "0.1.193"
  },
}

I used yarn to install the package and also deleted the lock file. I reached out to you via e-mail to support the project as well.

fukemy commented

you need to install exactly this version:
"react-native-worklets": "github:chrfalch/react-native-worklets#d62d76c",

@fukemy Thanks - that got me past the previous error, but now I have the same problem as @yadigarbz on Android. Maybe the current version only works on iOS? I thought only the feature set would be smaller on Android, not that there would be build errors. Still, I'm excited for this release and can't wait to see it working on Android too.

fukemy commented

I am testing iOS first. I got a build error on Android too, because of the caknzckaovjbsdovbsdovjb... problem - I had to disable autolinking for VisionCamera on Android, and I'm waiting for news on v3.