"could not load model from given file path" issue caused by jllama.dll
mymagicpower opened this issue · 2 comments
For the Windows GPU build, I found that the issue "could not load model from given file path" is caused by jllama.dll.
If I replace the GPU build's jllama.dll with the CPU build's jllama.dll, it works fine on GPU.
Error log:
"D:\system\Program Files\Java\jdk-11.0.16.1\bin\java.exe" "-javaagent:C:\Program Files\JetBrains\IntelliJ IDEA 2023.1\lib\idea_rt.jar=57690:C:\Program Files\JetBrains\IntelliJ IDEA 2023.1\bin" -Dfile.encoding=UTF-8 -classpath F:\LLM\java-llama.cpp\target\test-classes;F:\LLM\java-llama.cpp\target\classes;C:\Users\admin.m2\repository\junit\junit\4.13.1\junit-4.13.1.jar;C:\Users\admin.m2\repository\org\hamcrest\hamcrest-core\1.3\hamcrest-core-1.3.jar;C:\Users\admin.m2\repository\org\jetbrains\annotations\24.0.1\annotations-24.0.1.jar examples.MainExample
ggml_init_cublas: found 1 CUDA devices:
Device 0: NVIDIA GeForce RTX 3060, compute capability 8.6
error loading model: failed to open unknown: No such file or directory
llama_load_model_from_file: failed to load model
llama_init_from_gpt_params: error: failed to load model 'unknown'
Exception in thread "main" de.kherud.llama.LlamaException: could not load model from given file path
at de.kherud.llama.LlamaModel.loadModel(Native Method)
at de.kherud.llama.LlamaModel.&lt;init&gt;(LlamaModel.java:54)
at examples.MainExample.main(MainExample.java:33)
Process finished with exit code 1
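As a side note for anyone hitting the same "No such file or directory" message: before suspecting the native library, it can help to verify from Java that the model file actually exists and is readable at the path you pass in. This is a minimal, stdlib-only sketch (the model path below is a placeholder, not a file from this repository):

```java
import java.nio.file.Files;
import java.nio.file.Path;

public class CheckModelPath {
    public static void main(String[] args) {
        // Placeholder path; substitute the GGUF/GGML file you pass to LlamaModel.
        Path modelPath = Path.of("models", "llama-2-7b.Q4_K_M.gguf");

        if (Files.isReadable(modelPath)) {
            // The absolute path shows exactly what the native loader will see.
            System.out.println("Model file found: " + modelPath.toAbsolutePath());
        } else {
            System.err.println("Model file missing or unreadable: " + modelPath.toAbsolutePath());
        }
    }
}
```

If the check passes but loading still fails with an "unknown" path in the native log (as above), the path string is likely being lost on the JNI side rather than in your Java code.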
Thanks for reporting this 👍 I fixed the "unknown" model problem (version 2.0.1). Unfortunately I still get a CUDA error:
GGML_ASSERT: java-llama.cpp/src/main/cpp/llama.cpp/ggml-cuda.cu:6596: src0->type == GGML_TYPE_F16
But I'm not sure if this is because my CUDA installation is broken. I am curious whether it works for you now; please report back otherwise.
Hi, sorry for the late update, but I think everything should be fixed now. There was a problem with the CMake file: some libraries were not properly linked. Feel free to re-open if you still experience any problems.