Error trying to load on windows
iHaagcom opened this issue · 10 comments
What version of windows and which models are you using?
Using the 13B model listed in the README. Running Server 2022 and Windows 10; both machines give the same error. We run the Python script in the same directory as main.exe, yeah?
Interesting, I haven't encountered that error before when running minigpt4.cpp on Windows.

The Python script does not have to be moved anywhere. As long as minigpt4.dll is somewhere in the base folder or the minigpt4 folder, it should work. You can obtain minigpt4.dll by downloading it from releases or by compiling the repo yourself.

Also, did you use the recommended models listed in the minigpt4.cpp README? (Section 3, Option 1 and Section 4, Option 1)
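The DLL lookup described above can be sketched as follows. This is a hypothetical helper for illustration only; the actual search logic in minigpt4_library.py may differ, and the `find_minigpt4_dll` name is not from the project.

```python
import os
from pathlib import Path
from typing import Optional

def find_minigpt4_dll(base_dir: str) -> Optional[Path]:
    """Search the base folder, then its minigpt4 subfolder, for minigpt4.dll.

    Illustrative sketch of the lookup order described in the comment above;
    not the project's actual implementation.
    """
    for folder in (Path(base_dir), Path(base_dir) / "minigpt4"):
        candidate = folder / "minigpt4.dll"
        if candidate.is_file():
            return candidate
    return None
```

If this returns a path, the shared library can then be loaded with `ctypes.CDLL(str(path))`.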
Yes, I followed the guide. That's the error I get trying to run it after compiling.
Unfortunately, I'm not able to reproduce your results on my end... This is the output from my computer running Windows 10.
```
(llm) C:\Users\maknee\Desktop\minigpt4.cpp\minigpt4>python minigpt4_library.py minigpt4-13B-f16.bin ggml-vicuna-13B-v0-q5_k.bin
Loading minigpt4 shared library...
Loaded library <__main__.MiniGPT4SharedLibrary object at 0x0000020040E8FB50>
llama.cpp: loading model from ggml-vicuna-13B-v0-q5_k.bin
llama_model_load_internal: format = ggjt v3 (latest)
llama_model_load_internal: n_vocab = 32001
llama_model_load_internal: n_ctx = 2048
llama_model_load_internal: n_embd = 5120
llama_model_load_internal: n_mult = 256
llama_model_load_internal: n_head = 40
llama_model_load_internal: n_layer = 40
llama_model_load_internal: n_rot = 128
llama_model_load_internal: ftype = 17 (mostly Q5_K - Medium)
llama_model_load_internal: n_ff = 13824
llama_model_load_internal: model size = 13B
llama_model_load_internal: ggml ctx size = 0.09 MB
llama_model_load_internal: mem required = 11014.53 MB (+ 1608.00 MB per state)
llama_new_context_with_model: kv self size = 1600.00 MB
INFO: LLM model init took 711 ms to complete
INFO: Model name: visual_encoder
INFO: Model name: ln_vision
INFO: Model name: query_tokens
INFO: Model name: Qformer
INFO: Model name: llama_proj
INFO: Load file took 158 ms to complete
INFO: Model type: Vicuna13B
INFO: Model size: 2090.87939453125 MB
INFO: Loading minigpt4 model took 0 ms to complete
INFO: Load model from file took 1151 ms to complete
INFO: Compute buffer uses 4.3424224853515625 MB
INFO: Scratch buffer uses 2814.200241088867 MB
INFO: Encoding image took 2330 ms to complete
The text in the picture is "lama"
The color of the text in the picture is not specified.
```
My other suggestion is to download minigpt4.dll from releases and try using that, or to compile minigpt4.cpp without any extra features such as AVX.
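A build along those lines might look like the following. The `MINIGPT4_AVX*` flag names are assumptions modeled on llama.cpp's `LLAMA_AVX*` convention; check the project's CMakeLists.txt for the actual option names before using them.

```shell
# Clone the repo and build without optional CPU features.
# Flag names below are assumed, not confirmed from the project.
git clone --recursive https://github.com/Maknee/minigpt4.cpp
cd minigpt4.cpp
cmake -B build -DMINIGPT4_AVX=OFF -DMINIGPT4_AVX2=OFF
cmake --build build --config Release
```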
In addition, could you send your version of minigpt4.dll so I can check whether the problem is reproducible on my end?
Could you compile the exe and upload it?
Here is the uploaded result in releases. Download the appropriate version for Windows. The ones with opencv may require you to install OpenCV.

Thank you, not sure why I couldn't compile it. Missing dependencies?
You need to install Git, as listed under Section 2, Option 2 (Requirements).
I'm also unable to run the webui, but the exe works. It failed to detect what was actually in the image I gave it, though.
The MiniGPT4 model itself isn't perfect. In addition, model quantization affects how well minigpt4.cpp performs.
What version of Python are you currently using? I have tested on Python 3.10, and the syntax probably doesn't work on older Python versions.
I'll fix this. Pull the latest from the releases page and let me know if it works.