google-ai-edge/mediapipe

The LlmInference model is not closing

Opened this issue · 3 comments

Have I written custom code (as opposed to using a stock example script provided in MediaPipe)

Yes

OS Platform and Distribution

Android 14

Mobile device if the issue happens on mobile device

QCOM ADP 8155

Browser and version if the issue happens on browser

No response

Programming Language and version

Kotlin/Java

MediaPipe version

0.10.18

Bazel version

No response

Android Studio, NDK, SDK versions (if issue is related to building in Android environment)

Android Studio Koala | 2024.1.1

Xcode & Tulsi version (if issue is related to building for iOS)

No response

Describe the actual behavior

I call the close() function on LlmInference, but it does not close immediately

Describe the expected behaviour

The LlmInference model closes and cleans up immediately

Standalone code/steps you may have used to try to get what you need

inferenceModel.close()

Other info / Complete Logs

I'm modifying the MediaPipe LlmInference example app for testing. 
I needed to interrupt the LLM inference, so I called close() in LlmInference.java, but it doesn't terminate immediately. 
With the Gemma 2B model it terminated right away, but with the Gemma 7B model it takes more than 2-3 minutes to stop. 
Are you planning to change this so that larger models can also terminate immediately? 
If there is another way, please let me know.
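In the meantime, a workaround sketch I'm considering (plain Java; `SlowModel` and `closeWithTimeout` are stand-ins I made up, not MediaPipe APIs) is to run close() on a background thread so the UI thread isn't blocked for the 2-3 minutes the native teardown takes, and only wait on it with a timeout:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class CloseOffMainThread {
    // Stand-in for the real LlmInference: simulates a close() that blocks
    // while native resources are torn down (200 ms here; minutes for 7B).
    static class SlowModel implements AutoCloseable {
        @Override
        public void close() throws InterruptedException {
            Thread.sleep(200); // pretend native teardown is slow
        }
    }

    // Run close() on a background thread and wait at most timeoutMs for it
    // to finish; returns whether it completed within the timeout.
    static boolean closeWithTimeout(AutoCloseable model, long timeoutMs)
            throws InterruptedException {
        CountDownLatch done = new CountDownLatch(1);
        ExecutorService executor = Executors.newSingleThreadExecutor();
        executor.execute(() -> {
            try {
                model.close();
            } catch (Exception ignored) {
                // close() failed; still release the waiter below
            } finally {
                done.countDown();
            }
        });
        executor.shutdown();
        return done.await(timeoutMs, TimeUnit.MILLISECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        boolean finished = closeWithTimeout(new SlowModel(), 1000);
        System.out.println("finished in time: " + finished);
    }
}
```

This doesn't make the native side release any faster, of course; it only keeps the app responsive while the slow close runs.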

Hi @moon5bal,

Thank you for your observation. Could you please let us know which physical device you are using for this implementation? Additionally, could you try this on another physical device to check if the behavior is the same? This information will help us investigate the issue further and discuss potential fixes for Gemma 7B and other large models we plan to support in the future.

Hi, @kuaashish
I'm using a Qualcomm ADP 8155 board. The board spec is below.
https://www.lantronix.com/products/sa8155p-automotive-development-platform/#product-specifications

  • Snapdragon SA8155P
  • Qualcomm Custom 64-bit Kryo octa-core CPU
  • Qualcomm Adreno™ 640 GPU
  • Qualcomm® Hexagon™ 696 DSP
  • LPDDR4X DRAM – 12GB
  • 128GB UFS

I only have the ADP 8155 device right now, so I can't check immediately. If I get other devices, I'll test on them.
Also, I think there are performance differences depending on the model,
but it seems that while the Java side is cleaned up when calling LlmInference.close(),
the native side is not being cleaned up.
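To illustrate what I mean about the Java/native split (a generic sketch of how a JNI-backed handle is usually managed; `Handle`, `State`, and the `Cleaner` wiring here are illustrative, not how MediaPipe actually implements it): the Java object can be garbage-collected while the native resource it wraps is only released when close() actually runs, so an explicit, deterministic close() is the only reliable release point.

```java
import java.lang.ref.Cleaner;

public class NativeHandleDemo {
    private static final Cleaner CLEANER = Cleaner.create();

    // Stand-in for a JNI-backed handle: explicit close() releases the
    // "native" resource deterministically; the Cleaner is only a
    // last-resort safety net that runs at some unspecified time after GC.
    static class Handle implements AutoCloseable {
        // Holds the release flag; must not reference the Handle itself,
        // or the Cleaner could never trigger.
        static class State implements Runnable {
            boolean released = false;
            public void run() { released = true; } // simulated native free
        }

        private final State state;
        private final Cleaner.Cleanable cleanable;

        Handle() {
            state = new State();
            cleanable = CLEANER.register(this, state);
        }

        @Override
        public void close() {
            cleanable.clean(); // runs the release exactly once, right now
        }

        boolean isReleased() { return state.released; }
    }

    public static void main(String[] args) {
        Handle h = new Handle();
        System.out.println("before close: " + h.isReleased());
        h.close();
        System.out.println("after close: " + h.isReleased());
    }
}
```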

@kuaashish I’m encountering the same issue. I have tested the LLM interface on the following devices:

  1. Google Pixel 9 Pro XL
  2. OnePlus ACE2 Pro
  3. OnePlus 9

This issue also occurs with version 0.10.16. And, unlike @moon5bal, I tested with Gemma 2 2B.

I’m not using the example app itself, but rather the exact same code and flow from it. In the example, the close() method isn’t called because there’s no need to.

However, in my app, I need to close the LLM either to interrupt the inference or to change certain parameters and recreate an LlmInference instance. I’ve tested two approaches:

  1. Not calling close()
    I believe this is incorrect and could lead to a memory leak. In version 0.10.16, the app crashes after recreating the instance five or six times. However, in version 0.10.18, it crashes after only two recreations.

  2. Calling close()
    If I call close(), the app crashes immediately.
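Since close() during an in-flight generation seems to be what crashes, the defensive pattern I'm experimenting with is to defer the actual close until the async generation reports done. This is only a sketch of that idea; `ModelWrapper`, `startGeneration`, and `onGenerationDone` are hypothetical names I'd wire into the result listener, not MediaPipe API:

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class SafeCloser {
    // Hypothetical wrapper: if close() is requested while a generation is
    // in flight, remember the request and perform the real close only when
    // the result listener reports done == true.
    static class ModelWrapper implements AutoCloseable {
        private final AtomicBoolean generating = new AtomicBoolean(false);
        private final AtomicBoolean closeRequested = new AtomicBoolean(false);
        private boolean closed = false;

        void startGeneration() { generating.set(true); }

        // Call this from the result listener when done == true.
        void onGenerationDone() {
            generating.set(false);
            if (closeRequested.get()) {
                doClose();
            }
        }

        @Override
        public void close() {
            if (generating.get()) {
                closeRequested.set(true); // defer until inference finishes
            } else {
                doClose();
            }
        }

        private synchronized void doClose() {
            if (!closed) { // guard against double-close
                closed = true;
                // here the real llmInference.close() would run
            }
        }

        boolean isClosed() { return closed; }
    }

    public static void main(String[] args) {
        ModelWrapper m = new ModelWrapper();
        m.startGeneration();
        m.close(); // deferred: inference still running
        System.out.println("closed immediately? " + m.isClosed());
        m.onGenerationDone(); // real close runs now
        System.out.println("closed after done? " + m.isClosed());
    }
}
```

This avoids tearing the engine down mid-inference, but it obviously doesn't give the immediate interruption I actually need.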

Here is a crash log from the Pixel 9 Pro XL with 0.10.16.
llm_crash_0.10.16.txt