larq/compute-engine

Fail to build Android App

LJHG opened this issue · 6 comments

LJHG commented

Hi, I am trying to go through the tutorial here about building an Android app. I tried the method of @tehtea here, but it does not seem to work for me.

About the bug

I can build the app in Android Studio, but it fails at runtime with the following errors:

E/tensorflow: ClassifierActivity: Failed to create classifier.
    java.lang.IllegalArgumentException: Failed to load XNNPACK delegate from current runtime. Have you added the necessary dependencies?
E/tensorflow: CameraActivity: Exception!
    java.lang.NullPointerException: Attempt to invoke virtual method 'void android.graphics.Bitmap.setPixels(int[], int, int, int, int, int, int)' on a null object reference

Steps to reproduce

Here is everything I did:

  1. Download the examples from here.
  2. Go to Build > Select Build Variant and select supportDebug for TFLite_Image_Classification_Demo_App.app and debug for the others, to use the lib_support solution instead of lib_task_api.
  3. Download the aar file here, and execute the following command to install it in my local Maven repository:
mvn install:install-file \
    -Dfile=lce-lite-v0.5.0.aar \
    -DgroupId=org.larq \
    -DartifactId=lce-lite -Dversion=0.1.000 -Dpackaging=aar
  4. Add mavenLocal() to the build.gradle file under the Android folder, so that it looks like this:
allprojects {
    repositories {
        mavenLocal()
        google()
        jcenter()
    }
}
  5. Follow the steps mentioned by @tehtea here to modify build.gradle under the lib_support folder. However, I use the local Maven repository for LCE, so it finally looks like this:
dependencies {
    implementation fileTree(dir: 'libs', include: ['*.jar'])
    implementation project(":models")
    implementation 'androidx.appcompat:appcompat:1.1.0'
    implementation 'org.larq:lce-lite:0.1.000' 

    // Build off of nightly TensorFlow Lite
    //implementation 'org.tensorflow:tensorflow-lite:0.0.0-nightly-SNAPSHOT'
    implementation 'org.tensorflow:tensorflow-lite-gpu:0.0.0-nightly-SNAPSHOT'


    //implementation 'org.tensorflow:tensorflow-lite-support:0.1.0'
    implementation ('org.tensorflow:tensorflow-lite-support:0.1.0') {
        exclude module: 'tensorflow-lite'
    }
    // Use local TensorFlow library
    // implementation 'org.tensorflow:tensorflow-lite-local:0.0.0'
}
  6. Build and run.

PS: The app may quit unexpectedly when running on a real device.

I'm new to Android, so if anything I wrote is ambiguous, feel free to ask!
Any help would be appreciated!
Thanks!

Hi @LJHG,
Great to hear that you are working with Larq Compute Engine!
Apologies for the delay, our team has been very busy. Unfortunately we have not looked at the Android app in a while and we don't have Android Studio set up at the moment; we only use the benchmark CLI tool. We hope to look into it in a few weeks' time.

@tehtea is this issue something you've encountered and can help with?

LJHG commented

@Tombana, thanks for the reply!
I'm trying to get a model running on Android. If the process mentioned above does not work, is it possible to write C++ code to do the inference instead?
If so, how can I manage to do that?

That is definitely possible, and we might be able to offer more help with that than with the Android Studio app.

As a first test you can try to run our example pre-built benchmarking C++ program: https://docs.larq.dev/compute-engine/benchmark/

If that works, you can try to compile your own program. For that, I recommend following the steps here: https://docs.larq.dev/compute-engine/build/android/
That guide shows you how to build this C++ program for Android: https://github.com/larq/compute-engine/blob/master/examples/lce_minimal.cc

Let me know if you run into any issues!

LJHG commented

@Tombana Thank you very much for the detailed reply :)!
I have managed to build and run the C++ program from https://github.com/larq/compute-engine/blob/master/examples/lce_minimal.cc on my Android phone.
Here are my remaining questions:
In this program, I manually set the input to specific numbers like this:

float* input = interpreter->typed_input_tensor<float>(0);
// Suppose the input size is 224 * 224 * 3,
// so set every pixel to (1, 2, 3).
// 224 * 224 = 50176 pixels in total.
for (int i = 0; i < 50176; i++) {
  input[i * 3 + 0] = 1;
  input[i * 3 + 1] = 2;
  input[i * 3 + 2] = 3;
}

When running on an Android device, how can I feed the input data to this C++ program?
Do I need to use OpenCV in this C++ program to read the image captured by the camera, or can I use JNI to call this C++ program? (This may sound inappropriate because I don't know much about JNI.)

When running on an Android device, how can I feed the input data to this C++ program?
Do I need to use OpenCV in this C++ program to read the image captured by the camera, or can I use JNI to call this C++ program? (This may sound inappropriate because I don't know much about JNI.)

Both options sound good to me. I don't have much experience with Android/Java, and none with JNI, so I'm afraid I can't offer help with that.

We have just updated our documentation with, among other things, a fix for the XNNPack issue; see here for details.

I think all the issues reported here are now resolved, so I'm closing this. Feel free to re-open if you think otherwise - or open a new issue if you have encountered any other problems with LCE on Android.