ParisNeo/lollms-webui

When using 'run.bat' an error is shown regarding the 'gpt4all-lora-quantized-ggml.bin' file being 'invalid model file'

DigitalRonin3000 opened this issue · 4 comments

Expected Behavior

When using 'run.bat' on Windows 10 machine, the previously downloaded model should be recognized as valid.

Current Behavior

When using 'run.bat' an error is shown about 'gpt4all-lora-quantized-ggml.bin' being an 'invalid model file', even though the newest model file was downloaded the same day (during the installation of GPT4ALL-UI).

Steps to Reproduce


  1. In the command prompt, run 'run.bat'.
  2. GPT4ALL tries to start, but during the step where the model is loaded an error is shown saying the model is invalid. However, the newest model was downloaded and placed in the 'models' folder just before running this file.
  3. The error shown is:
    "Checking discussions database...
    Ok
    llama_model_load: loading model from './models/gpt4all-lora-quantized-ggml.bin' - please wait ...
    ./models/gpt4all-lora-quantized-ggml.bin: invalid model file (bad magic [got 0x67676d66 want 0x67676a74])
    you most likely need to regenerate your ggml files
    the benefit is you'll get 10-100x faster load times
    see ggerganov/llama.cpp#91
    use convert-pth-to-ggml.py to regenerate from original pth
    use migrate-ggml-2023-03-30-pr613.py if you deleted originals
    llama_init_from_file: failed to load model
    llama_generate: seed = 1680983129

system_info: n_threads = 8 / 12 | AVX = 1 | AVX2 = 1 | AVX512 = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | VSX = 0 |"

Screenshots

See attached screenshot for an example.

uninstall gpt4all from uninstall.bat
download the repo as zip and replace the files
run the install.bat


I followed your steps. However, 'uninstall.bat' only removes the virtual environment; the GPT4ALL-UI folder remains. So I deleted it manually and installed a fresh copy from the repository (downloaded the zip as you said).
I still get the exact same error :/

The problem is with this line in the setup script:
if command -v git > /dev/null 2>&1; then
echo "OK"

Asked ChatGPT about it, and it seems `command -v` isn't suitable for Windows; it's a shell builtin mostly used on Unix systems.
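Since `command -v` is a POSIX shell builtin that cmd.exe doesn't provide, a cross-platform way to do the same check is Python's `shutil.which`, which searches the PATH the same way the shell would on both Windows and Unix. A hypothetical sketch, not part of the repo's actual script:

```python
import shutil

def have_git():
    # shutil.which returns the full path to the executable if it is
    # on PATH, or None otherwise; works on Windows and Unix alike.
    return shutil.which("git") is not None
```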

Got it working by manually skipping that step and doing this:
if not exist tmp/llama.cpp git clone https://github.com/ggerganov/llama.cpp.git tmp\llama.cpp
move models\gpt4all-lora-quantized-ggml.bin models\gpt4all-lora-quantized-ggml.bin.original
python tmp\llama.cpp\migrate-ggml-2023-03-30-pr613.py models\gpt4all-lora-quantized-ggml.bin.original models\gpt4all-lora-quantized-ggml.bin

After that the run batch worked fine.
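The manual workaround above can also be sketched as a small Python script (paths and the migration-script name are taken from the thread; treat this as illustrative, not a drop-in fix):

```python
import os
import subprocess
from pathlib import Path

MODEL = Path("models") / "gpt4all-lora-quantized-ggml.bin"
BACKUP = MODEL.parent / (MODEL.name + ".original")
LLAMA_DIR = Path("tmp") / "llama.cpp"
MIGRATE = LLAMA_DIR / "migrate-ggml-2023-03-30-pr613.py"

def workaround(run=subprocess.check_call, rename=os.rename):
    # 1. Clone llama.cpp if the migration script isn't present yet.
    if not LLAMA_DIR.exists():
        run(["git", "clone",
             "https://github.com/ggerganov/llama.cpp.git", str(LLAMA_DIR)])
    # 2. Keep the original model file as a backup.
    rename(MODEL, BACKUP)
    # 3. Convert the old-format file into the new ggjt layout.
    run(["python", str(MIGRATE), str(BACKUP), str(MODEL)])
```

The `run` and `rename` parameters only exist so the steps can be swapped out or dry-run; calling `workaround()` with the defaults performs the same three commands as the batch-style fix above.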

If this pull request gets accepted, all you have to do is convert the ggml model with install.bat, and then run run.bat normally.