Error on inference
foromer4 opened this issue · 3 comments
Hi, running the script from_mxnet.py on a HiKey 970 board with an ARMv8 CPU and a Mali GPU, I get this error:
LLVM ERROR: Only small and large code models are allowed on AArch64
If I disable LLVM, I get an error that it is disabled. Any suggestions? Thanks
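For context, the failing build step looks roughly like the sketch below. A common way to avoid this LLVM error is to pass an explicit AArch64 triple as the host target. This is an assumption-laden sketch using the Relay frontend; the original script may use the older NNVM API, and `mod`/`params` stand in for whatever the MXNet frontend returns:

```python
import tvm
from tvm import relay

# Assumed targets: kernels go to the Mali GPU via OpenCL,
# host-side code is compiled by LLVM for the Arm CPU.
target = "opencl -device=mali"
target_host = "llvm -mtriple=aarch64-linux-gnu"

# `mod` and `params` would come from relay.frontend.from_mxnet(...).
# Passing an explicit AArch64 -mtriple tells LLVM which code model
# applies, which often avoids the error quoted above.
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target=target, target_host=target_host, params=params)
```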
Can you trace the error with pdb to see which line of the code generates it? Separately, I would suggest building a standalone TVM runtime and using RPC to communicate with your HiKey 970 device, so that you only need to ensure the OpenCL backend is available on the device.
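The RPC workflow could look like the following sketch. The address and file names are placeholders, and `lib` is the compiled library from a build like the one above; older TVM versions ship the tempdir helper as `tvm.contrib.util` rather than `tvm.contrib.utils`:

```python
from tvm import rpc
from tvm.contrib import utils  # older TVM versions: tvm.contrib.util

# Placeholder address: start the server on the board first with
#   python -m tvm.exec.rpc_server --host 0.0.0.0 --port 9090
remote = rpc.connect("192.168.1.42", 9090)

# Export the library built on the host and upload it to the device;
# exporting as .tar lets the on-device RPC server do the final linking.
temp = utils.tempdir()
lib_path = temp.relpath("mobilenet.tar")
lib.export_library(lib_path)
remote.upload(lib_path)
rlib = remote.load_module("mobilenet.tar")
```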
Thanks for the quick response. The error is on this line, where the graph is compiled:
https://github.com/liangfu/mxnet-mobilenet-v2/blob/master/from_mxnet.py#75
Regarding your suggestion: on my host machine (Ubuntu 16.04, Intel i7 CPU) the script also fails, because although I have OpenCL installed, it can't find any OpenCL devices.
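A quick way to check whether TVM sees an OpenCL device (the `clinfo` command-line tool can separately confirm whether the driver exposes any devices at all) is a one-liner like this; the API name varies by TVM version, as noted in the comment:

```python
import tvm

# Does this TVM build see an OpenCL device at all?
# (tvm.opencl(0) on older versions; tvm.device("opencl", 0) on newer ones.)
ctx = tvm.opencl(0)
print("OpenCL device found:", ctx.exist)
```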
Also, since I want to measure performance on the HiKey board, I am afraid that RPC might not give accurate results (?)
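RPC only moves data and modules; the timed workload runs entirely on the board, and TVM's `time_evaluator` measures it on-device, so transport overhead is excluded. A sketch, assuming a recent TVM where `relay.build` produces a factory module (older versions used `graph_runtime.create(graph, rlib, ctx)`), and assuming the usual MXNet input name "data" and a 1x3x224x224 shape:

```python
import numpy as np
import tvm
from tvm.contrib import graph_executor  # older TVM: tvm.contrib.graph_runtime

ctx = remote.cl(0)  # OpenCL context on the board, via the RPC session
m = graph_executor.GraphModule(rlib["default"](ctx))
data = np.random.uniform(size=(1, 3, 224, 224)).astype("float32")
m.set_input("data", tvm.nd.array(data, ctx))

# time_evaluator executes the graph on the device itself and only ships
# the measured times back, so RPC overhead is not part of the result.
ftimer = m.module.time_evaluator("run", ctx, number=10, repeat=3)
prof = ftimer()
print("mean inference time: %.2f ms" % (prof.mean * 1000))
```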
I think the root cause of this issue is likely your OpenCL driver and how TVM is compiled with OpenCL, not anything specific to running MobileNetV2 inference. Therefore, I would recommend posting TVM-related questions on http://discuss.tvm.ai .
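As a final sanity check, you can ask TVM whether OpenCL support was compiled in at all, which is distinct from the driver finding a device (older versions exposed this as `tvm.module.enabled` instead):

```python
import tvm

# Check whether this TVM build was compiled with the OpenCL runtime.
# If this prints False, rebuild TVM with set(USE_OPENCL ON) in config.cmake
# (older releases used USE_OPENCL = 1 in config.mk instead).
print("OpenCL support compiled in:", tvm.runtime.enabled("opencl"))
```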