pytorch/ios-demo-app

ImageSegmentation Build Failure on Mac M1

ZwwWayne opened this issue · 19 comments

Hi, I ran into the following compilation issue on a Mac M1:

ld: in /Users/xxx/projects/ios-demo-app/ImageSegmentation/Pods/LibTorch/install/lib/libtorch.a(empty.cpp.o),
building for iOS Simulator, but linking in object file built for iOS,
file '/Users/xxx/projects/ios-demo-app/ImageSegmentation/Pods/LibTorch/install/lib/libtorch.a' for architecture arm64

The environment information is as follows:
PyTorch: 1.8.1
Output of pod --version: 1.10.1
CMake version: 3.11.3

I also tried changing the Podfile from pod 'LibTorch', '~>1.7.0' to pod 'LibTorch', '~>1.8.0', but the failure persists.

Did anyone find a resolution for this issue?

The simple answer is to add this to your Podfile. Excluding arm64 from simulator builds makes Xcode build the simulator app as x86_64 (run under Rosetta on Apple Silicon), which matches the architectures the prebuilt libtorch.a actually supports:

post_install do |installer|
  installer.pods_project.targets.each do |target|
    target.build_configurations.each do |config|
      config.build_settings['EXCLUDED_ARCHS[sdk=iphonesimulator*]'] = 'arm64'
      config.build_settings['IPHONEOS_DEPLOYMENT_TARGET'] = '10.0'
      config.build_settings['ENABLE_BITCODE'] = 'NO'
    end
  end
end

@darkThanBlack
Hi, I added this code but it's still not working. I am using a MacBook Pro with an M1 Pro.

platform :ios, '12.0'

target 'PTmodelTest' do
  pod 'LibTorch', '~>1.7.0'
end

post_install do |installer|
  installer.pods_project.targets.each do |target|
    target.build_configurations.each do |config|
      config.build_settings['EXCLUDED_ARCHS[sdk=iphonesimulator*]'] = 'arm64'
      config.build_settings['IPHONEOS_DEPLOYMENT_TARGET'] = '10.0'
      config.build_settings['ENABLE_BITCODE'] = 'NO'
    end
  end
end

Did anyone find a resolution?
Thank you.

@kenza-djeddiali Can you paste your Xcode error info or a screenshot for me?

Hi @darkThanBlack, thank you for your feedback.
Here it is:
image

I changed the parameters yesterday, but now I have a new error (https://www.devfaq.fr/question/xcode-12-construction-pour-ios-simulator-mais-liaison-dans-un-fichier-objet-construit-pour-ios-pour-l-39-architecture-arm64):
image
and this:
image

What I want is to develop my detection model and integrate it into the app. I tried several options: Create ML works really well, and converting the PyTorch model to a .pt file is done (so I want to test it with PyTorch Mobile), but PyTorch to Core ML did not work, and TensorFlow Lite has the same pod installation problem.

// tutorials I consulted: https://youtu.be/ca4RGvIY5cc, https://youtu.be/amTepUIR93k

Thank you, have a nice day 🤗

@kenza-djeddiali

  • Try #import <Libtorch-Lite/Libtorch-Lite.h>
  • As a beginner, I made the same mistakes. My immediate conclusion is: don't try to convert machine learning models from one framework to another. As iOS developers, we might habitually think of a model as a special kind of file or folder that can be interpreted and executed by multiple frameworks, and yes, the frameworks look like they have really nice conversion tools.
    But in fact, to run well in a commercial-grade app, we roughly have to get all of the following right:
  1. the input data format;
  2. the algorithm interface;
  3. the model file;
  4. the machine learning framework corresponding to the model;
  5. the output data parsing algorithm;
  6. etc.

It all depends on the algorithm engineer or the paper.
The reason is simple: the so-called framework is just a fairly literal translation of the mathematical formulas and algorithms in the paper.
For example, in the PyTorch demo we need to call the forward() method. But why? What is forward? What does forward mean?
There is also a size parameter, {1, 3, 224, 224}. But why? Why does the image size need to be 224? Can it be changed to 200? The answer is no. Even if it is changed to 223, the output data will be very different. All the parameters involved in this whole process are damn BLACK MAGIC. PLEASE CARVE THIS POINT INTO YOUR DNA.
If we don't understand the whole machine learning process, the only option is to ask the person who made the model and have them tell us how to set the specific parameters.
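To make the forward() call and the {1, 3, 224, 224} shape concrete, here is a minimal C++ sketch of the kind of call the demo's Objective-C++ wrapper makes, using the LibTorch C++ API. The model path and the all-ones dummy input are illustrative assumptions, not the demo's actual code (the Lite pod loads models with torch::jit::_load_for_mobile instead of torch::jit::load):

#include <torch/script.h>
#include <iostream>

int main() {
  // Load a TorchScript model exported from Python ("model.pt" is a placeholder path).
  torch::jit::script::Module module = torch::jit::load("model.pt");
  module.eval();

  // The demo model expects a 1x3x224x224 float input
  // (batch, RGB channels, height, width), so inference must feed exactly that shape.
  std::vector<torch::jit::IValue> inputs;
  inputs.push_back(torch::ones({1, 3, 224, 224}));

  // forward() runs the model's graph on the input and returns its output tensor.
  at::Tensor output = module.forward(inputs).toTensor();
  std::cout << output.sizes() << std::endl;
  return 0;
}

Feeding any other shape, or pixel values normalized differently from what the model was trained on, will generally produce a very different result, which is exactly the "black magic" point above.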

Hi @darkThanBlack, thank you for your feedback, it's a real headache, I'm back to error1 🤣.
(in /Users/ken/ProjetsIOS/PTmodelTest/Pods/LibTorch-Lite/install/lib/libtorch.a(empty.cpp.o), building for iOS Simulator, but linking in object file built for iOS, file '/Users/ken/ProjetsIOS/PTmodelTest/Pods/LibTorch-Lite/install/lib/libtorch.a' for architecture arm64)
I tried many many things, but I’m stuck (// ex. https://discuss.pytorch.org/t/cant-use-ios-libtorch-as-a-dependency-for-a-library-using-cocoapods/75544/5)

Actually it looks easy, convert ... but basically here is what happens, I exhausted all my cards lol, maybe switch to tensorflow lite... I couldn't run tensorflow model at the front on my M1 pro machine ! we'll see what I have to find!

thank you @darkThanBlack

@kenza-djeddiali Emm... have you heard of lipo -info? You can use it to check which architectures libtorch.a actually contains. Correct output might look like this image and include i386 armv7 x86_64 arm64:
image
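For example, running it on the libtorch.a path from the error messages in this thread (adjust the path to your own project) should print something like:

lipo -info Pods/LibTorch-Lite/install/lib/libtorch.a
Architectures in the fat file: Pods/LibTorch-Lite/install/lib/libtorch.a are: i386 armv7 x86_64 arm64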

If not, I suggest you give up using CocoaPods with PyTorch, because they didn't create the .a file correctly. Try building PyTorch yourself instead; you can find the chapter of the same name in the PyTorch guide. Create your own .a file and drag it into the project. I will consider uploading my demo if I have time later.

@darkThanBlack Hi, I'm going to look into that. Actually the .a file shows in red; on the forums one suggested solution is to delete it, but then I got error 1 back.
Thank you, I would just like to have some code that works 😅

@darkThanBlack can you explain how your solution works?

Demo project uploaded, LINK
@denfromufa
@kenza-djeddiali

I used the pods from @darkThanBlack's HelloWorld app (pytorch-lite 1.9.0), built on a physical device using Xcode, and it worked.

I'm getting the same error on https://github.com/pytorch/ios-demo-app/tree/master/HelloWorld.

ld: in /Users/steve.ham/Downloads/ios-demo-app-master/HelloWorld/HelloWorld/Pods/LibTorch-Lite/install/lib/libtorch.a(empty.cpp.o), building for iOS Simulator, but linking in object file built for iOS, file '/Users/steve.ham/Downloads/ios-demo-app-master/HelloWorld/HelloWorld/Pods/LibTorch-Lite/install/lib/libtorch.a' for architecture arm64
clang: error: linker command failed with exit code 1 (use -v to see invocation)

Chip: Apple M1 Pro
Pod: 'LibTorch-Lite', '~> 1.10.0'
Device:

  1. Works on a physical device (iPhone 12 Pro).
  2. Doesn't work on the iOS Simulator (iPhone 13 mini).

Are any LibTorch developers trying to fix this?

Strangely, if I install 'LibTorch-Lite' together with 'OpenCV', the project runs on the simulator too.

platform :ios, '12.0'
target 'HelloWorld' do
    pod 'LibTorch-Lite'
    pod 'OpenCV'
end

@steve-ham does this actually work? I tried it and it fixes one compile error but causes another. I looked closer at the OpenCV pod: it's quite outdated, and it sets these excluded architectures for simulators, which could just be masking the actual issue: https://github.com/CocoaPods/Specs/blob/master/Specs/5/b/d/OpenCV/4.3.0/OpenCV.podspec.json#L41

@mlynch Somehow my Podfile script above doesn't work anymore; however, if I add the config below in the app project's Build Settings (not the Pods project's Build Settings), it works:

  1. Tap the 'HelloWorld' project at the top left.
  2. Tap 'HelloWorld' under 'TARGETS'.
  3. Under 'Excluded Architectures', add 'Any iOS Simulator SDK' = arm64 for both the Debug and Release configurations.

This is the same EXCLUDED_ARCHS[sdk=iphonesimulator*] = arm64 setting that the earlier post_install applied to the Pods targets, just set on the app target instead.

@steve-ham After following all the steps you described above, I now get this error. Can you suggest any solution? Thanks.


Error: Target Integrity
The linked framework 'Pods_Labelling_Board.framework' is missing one or more architectures required by this target: x86_64.

Error Location:
image

I am using pod 'LibTorch', '~>1.10.0'

This problem still persists. If you include arm64 in the excluded architectures, then at run time the assets or model files don't get copied and you get an LLDB error. Has anyone solved this for the iOS Simulator yet?