alibaba/MNN

LaMa Inpainting Model outputs 0 when running with GPU(Metal)

K-prog opened this issue · 3 comments

Platform (include target platform as well if cross-compiling):

macOS Sonoma 14.5 (M1 Pro)

GitHub Version:

If you downloaded the source as a zip, provide the download date or, better yet, the git revision from the comment section of the zip (obtainable by running 7z l PATH/TO/ZIP and searching for Comment in the output, e.g. Comment = bc80b11110cd440aacdabbf59658d630527a7f2b). If you used git clone, provide the commit id from the first line of git log.

Commit = e1011161ed0382e1a33a65bfdde8bee931dbcfaf

Compiling Method:

cmake

Paste the cmake arguments or the path of the build script used, as well as the full output of the cmake process, here or to pastebin.

Build script: https://github.com/uttarayan21/mnn-nix-overlay/blob/master/mnn.nix

cmake arguments used:

-DMNN_USE_SYSTEM_LIB=OFF -DMNN_BUILD_SHARED_LIBS=OFF -DMNN_SEP_BUILD=OFF -DMNN_BUILD_TOOLS=OFF -DMNN_PORTABLE_BUILD=ON -DMNN_METAL=ON

Build Log:

Paste log here or pastebin

The build.log file was not generated.

I am trying to implement the LaMa (Resolution-robust Large Mask Inpainting with Fourier Convolutions) model on MNN. It runs properly on the CPU backend, but when the Metal backend is selected the output consists of all zeros. The original model is in PyTorch, so I used the ONNX implementation of it via this export notebook; both the fp16 and fp32 versions of the model give zero output on Metal.

MNNConvert command used:

./MNNConvert -f ONNX --modelFile lama_fp32.onnx --MNNModel lamafp16.mnn --fp16 --bizCode MNN --debug
[screenshot]

The model was converted on a different machine (Linux).

GPU (Metal) Output:

[screenshot]

CPU Output:

[screenshot]

Source Code and Model Files -> link

Code Snippet

[screenshot]
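Since the snippet above is only a screenshot, here is a minimal sketch of how the inference might be set up with MNN's Interpreter/Session API in C++. The model path, the input tensor names ("image" and "mask"), the 512x512 shapes, and the precision setting are assumptions for illustration and may not match the actual code:

```cpp
#include <MNN/Interpreter.hpp>
#include <MNN/Tensor.hpp>

#include <memory>

int main() {
    // Load the converted model (file name is an assumption).
    std::shared_ptr<MNN::Interpreter> net(
        MNN::Interpreter::createFromFile("lamafp16.mnn"));

    // Backend selection: MNN_FORWARD_CPU produces the expected result,
    // MNN_FORWARD_METAL produces an all-zero output.
    MNN::ScheduleConfig config;
    config.type = MNN_FORWARD_METAL;
    MNN::BackendConfig backendConfig;
    backendConfig.precision = MNN::BackendConfig::Precision_High;
    config.backendConfig = &backendConfig;
    auto session = net->createSession(config);

    // LaMa takes an image and a mask; the input names and 512x512 shapes
    // are assumptions based on the ONNX export.
    auto imageInput = net->getSessionInput(session, "image");
    auto maskInput  = net->getSessionInput(session, "mask");
    net->resizeTensor(imageInput, {1, 3, 512, 512});
    net->resizeTensor(maskInput,  {1, 1, 512, 512});
    net->resizeSession(session);

    // Fill the inputs through host tensors, then copy to the device tensors.
    std::unique_ptr<MNN::Tensor> imageHost(
        MNN::Tensor::createHostTensorFromDevice(imageInput, false));
    std::unique_ptr<MNN::Tensor> maskHost(
        MNN::Tensor::createHostTensorFromDevice(maskInput, false));
    // ... fill imageHost->host<float>() and maskHost->host<float>() here ...
    imageInput->copyFromHostTensor(imageHost.get());
    maskInput->copyFromHostTensor(maskHost.get());

    net->runSession(session);

    // Copy the result back to the host; on Metal this buffer is all zeros,
    // on CPU it contains the inpainted image.
    auto output = net->getSessionOutput(session, nullptr);
    std::unique_ptr<MNN::Tensor> outputHost(
        MNN::Tensor::createHostTensorFromDevice(output, true));
    const float* result = outputHost->host<float>();
    (void)result;
    return 0;
}
```

Switching config.type between MNN_FORWARD_METAL and MNN_FORWARD_CPU is the only difference between the two outputs shown above.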

P.S. Any other model runs fine on the GPU (Metal) backend; the problem is only with this one.

Any help would be appreciated :)