Pinned issues
Issues
Question: on armv8.2 CPU phones, is fp16 convolution inference already supported?
#2888 opened by wegarwo - 6
On Android with OpenCL, using setCacheFile() does not reduce model initialization time
#2852 opened by CloudGuardian - 1
Blocked by errors when building the test tools
#2887 opened by huma8848888 - 0
Errors when building the example project code
#2886 opened by huma8848888 - 0
fastOnnxTest passes, but outputs are inconsistent in actual use
#2885 opened by anliyuan - 1
Help: printed contents of input_tensor are all zeros
#2884 opened by anliyuan - 0
After quantization, how to change the output from float to int8?
#2883 opened by taijung - 0
Has anyone successfully converted MobileNet V3? Accuracy drops a lot after converting to MNN
#2882 opened by huma8848888 - 2
After converting ONNX to MNN, model inference differs too much from the original model
#2880 opened by huma8848888 - 2
On mobile, OpenCL image inference results are wrong (all black) at Normal and Low precision
#2872 opened by CloudGuardian - 2
phi2 inference with llm_demo reports an error
#2881 opened by quinlan-w - 3
MNN 1.2: what is MNN_CODEGEN_REGISTER? Can it be disabled?
#2860 opened by xin486946 - 2
meta-llama3-8b-instruct inference with llm_demo reports an error
#2879 opened by quinlan-w - 2
Cut MNN build size according to the operators in use
#2877 opened by kushalpatil07 - 11
After resizeSession, MNN model inference results show large errors
#2871 opened by younghuvee - 1
After adding a multi-head attention operator, error: Reshape41 operator exception
#2878 opened by niucheney - 1
MNN Llama-3-8B-Instruct export fails
#2876 opened by quinlan-w - 6
Enabling Vulkan on mobile fails; it is not a code issue, the library file is found so it is not a library issue either, and ncnn can use Vulkan normally
#2841 opened by deahhh - 6
pymnn inference quality is unstable
#2867 opened by kmn1024 - 2
Does MNN support intra_op_num_threads to configure the number of physical cores? numThread does not help with this.
#2858 opened by avinash31d - 1
Memory/CPU usage during MNN inference
#2870 opened by jamesdod - 1
For a multi-class segmentation task deployed on Android, why is the output a one-dimensional tuple? Is there a good way to obtain the segmentation mask?
#2868 opened by MrBroWinter - 1
In OpenCLBackend.cpp, createOpenCLSymbolsOperatorSingleInstance() creates gInstance; where is it released?
#2869 opened by xxxxxxLD - 0
Setting the number of inference threads via an environment variable
#2864 opened by Zzzz1111 - 3
Converting the ONNX model of the decoder part of SAM failed
#2832 opened by AminGeng - 1
Demo multiPose fails, reporting inputChannel: 225, batch: 1, width: 225, height: 3. Input data channel may mismatch the filter channel count
#2859 opened by bin913 - 0
Cannot train after converting from .onnx to .mnn
#2862 opened by teslanow - 2
On Kunpeng 920, session inference with fp16 gives wrong results
#2855 opened by zhenjing - 2
On Kunpeng 920, Benchmark crashes when running a quantized model
#2856 opened by zhenjing - 1
Inference time of a pure-GPU network
#2853 opened by danpeng2 - 4
With multiple sessions (multiple algorithms) in a CPU compute scenario, the internal thread pool performs 50% worse than the OpenMP thread pool
#2854 opened by zhenjing - 5
Question about how to use mnncompress
#2847 opened by stricklandye - 2
How to build the release version of mnn_2.8.0_ios_llm.zip for MNN on iOS
#2845 opened by xingjinglu - 1
Using Vulkan to run a model on Android, inference still seems to run on the CPU
#2846 opened by Jverson - 3
An almost always reproducible heap-use-after-free
#2848 opened by VincentZhaoBing - 1
Kunpeng 920 supports fp16, but the yolov8 model shows no speedup with fp16 compared to fp32
#2851 opened by zhenjing - 3
Running llm_demo on Mac reports an error: ./llm_demo model/qwen-1.8b-int4
#2834 opened by luocf - 0
mnnquant quantization error on a Debian system
#2849 opened by LimingA1 - 3
Can the MNN framework be used on NVIDIA Orin X (with the GPU as the backend)?
#2844 opened by Moxoo - 8
MNN inference takes longer than PyTorch inference
#2835 opened by jtyan123 - 3
Error about the size of a Concat op
#2838 opened by jtyan123 - 10
A program that runs fine on AMD64 segfaults on Arm64, during session creation
#2840 opened by deahhh - 2
Is GPU inference supported on Windows?
#2833 opened by puxuntu - 1
TorchScript to MNN conversion error
#2830 opened by puxuntu