hanleyweng/CoreML-in-ARKit

200MB+ CoreML models cause ARKit hiccups

oishi89 opened this issue · 4 comments

Hi @hanleyweng
I'm doing the same thing as your project, and I'm running into a big performance problem with a larger (200MB) CoreML model. Whenever CoreML performs its request, the ARKit camera preview flickers. It seems the CoreML inference blocks rendering for a few milliseconds. I tried running the larger model in your project as well and hit the same issue! Do you have any idea?

Hi @oishi89 , Not entirely sure why this issue is happening to you. The multithreading should mean that the model doesn't cause hiccups for the camera. Apple's VGG16 (553.5 MB) CoreML model seems to run fine in this setup for me on an iPhone 7 Plus. Apple does seem to have been tweaking their ARKit and CoreML code though.

Would you mind sharing more details? If you can provide a link to your model, a description of how to test it (what objects it recognises), the version of Xcode you're using, and the devices you've tested on, I could give it a try. (Also, if you could upload a gif/video of the issue, that'd be great!) I suspect, though, that this might be something on Apple's end at the moment, and all we can do is compress our models further.
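For reference, the off-main-thread setup being described looks roughly like the sketch below. This is an approximation, not the project's exact code; the queue label and function name are hypothetical, and `pixelBuffer` would come from something like `sceneView.session.currentFrame?.capturedImage`:

```swift
import Vision
import CoreML

// Hypothetical background queue so inference never blocks ARKit's render loop.
let visionQueue = DispatchQueue(label: "com.example.visionQueue")

// `model` is assumed to be a VNCoreMLModel wrapped around a compiled .mlmodel,
// e.g. try VNCoreMLModel(for: Inceptionv3().model).
func classifyFrame(pixelBuffer: CVPixelBuffer, model: VNCoreMLModel) {
    let request = VNCoreMLRequest(model: model) { request, error in
        guard let results = request.results as? [VNClassificationObservation] else { return }
        // Hop back to the main thread before touching UI / SceneKit nodes.
        DispatchQueue.main.async {
            print(results.first?.identifier ?? "no result")
        }
    }
    request.imageCropAndScaleOption = .centerCrop

    // The perform(_:) call is synchronous and expensive, so it runs off-main.
    visionQueue.async {
        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
        try? handler.perform([request])
    }
}
```

With this structure, the camera preview should in principle stay smooth even while a large model is evaluating, which is why the hiccups with 200MB+ models are surprising.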

Hi @hanleyweng
I'm using Xcode Version 9.0.1 (9A1004), testing on an iPhone SE and an iPhone 7 Plus, both on iOS 11.1.
Please give me your email; I would like to share my model with you. For testing, just add my model to your project and have CoreML perform a request with a handler while observing the camera hiccups. You don't need to do anything in the completion callback yet.

I see. That's good to know. P.S. iOS 11.1 is currently beta software, so it'll also be good to know which beta version you're running.
You can find my email on the site listed in my github profile.


UPDATE:

It appears LARGE MLModels are having issues in the iOS 11.1 betas.
Both @oishi89 's YOLO-inspired mlmodel (200MB) and Apple's VGG16 (554MB) mlmodel have issues on the iOS 11.1 betas (tested on beta 4 and beta 5). The ARKit camera slows / hiccups whenever the model is run.

From memory, this wasn't an issue in earlier iOS 11.0 beta experiments (with VGG16).

I suspect this is due to Apple tweaking CoreML / ARKit, perhaps with regards to CPU / GPU usage or threading. Maybe there'll be a fix in the future. Currently just speculating.

Unfortunately, for now, it seems that all we can do is compress our neural nets / CoreML models. Models like Inception v3, at ~100MB, appear to run fine in realtime.