ParisNeo/lollms-webui

Run.bat stuck on logo - Windows 10

andzejsp opened this issue · 36 comments

Everything installed fine, no problems, and I converted the model. Then when I run run.bat it's stuck on this:

E:\VBA_PROJECTS\Git\gpt4all-ui>echo off
HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH
HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH
HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH
HHHHHHHHHHHH     .HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH
HHHHHHHHHHH.     ,HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH
HHHHHHHHHHHHH.##  HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH
HHHHHHHHHHHHHHHHHHH#.HHHHH/*,*,*,*,*,*,*,*,***,*,**#HHHHHHHHHHHHHHHHHHHHHHHHHHHH
HHHHHHHHHHHHHHHHHHHHH.*,,***,***,***,***,***,***,*******HHHHHHHHHHHHHHHHHHHHHHHH
HHHHHHHHHHHHHHHHHHH*,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,*,,,,,HHHHHHHHHHHHHHHHHHHHH
HHHHHHHHHHHHHHHHH.,,,***,***,***,***,***,***,***,***,***,***/HHHHHHHHHHHHHHHHHHH
HHHHHHHHHHHHHHHH*,,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*HHHHHHHHHHHHHHHHHH
HHHHHHHHHHHHHHH#,***,***,***,***,***,***,***,***,***,***,***,**HHHHHHHHHHHHHHHHH
HHHHHHHHHH..HHH,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,*#HHHHHHHHHHHHHHHH
HHHHHHH,,,**,/H*,***,***,***,,,*,***,***,***,**,,,**,***,***,***H,,*,***HHHHHHHH
HHHHHH.*,,,*,,,,,*,*,*,***#HHHHH.,,*,*,*,*,**/HHHHH.,*,*,*,*,*,*,*,*****HHHHHHHH
HHHHHH.*,***,*,*,***,***,.HHHHHHH/**,***,****HHHHHHH.***,***,***,*******HHHHHHHH
HHHHHH.,,,,,,,,,,,,,,,,,,,.HHHHH.,,,,,,,,,,,,.HHHHHH,,,,,,,,,,,,,,,,,***HHHHHHHH
HHHHHH.,,,,,,/H,,,**,***,***,,,*,***,***,***,**,,,,*,***,***,***H***,***HHHHHHHH
HHHHHHH.,,,,*.H,,,,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,***H*,,,,/HHHHHHHHH
HHHHHHHHHHHHHHH*,***,***,**,,***,***,***,***,***,***,***,***,**.HHHHHHHHHHHHHHHH
HHHHHHHHHHHHHHHH,,,,,,,,*,,#H#,,,,,*,,,*,,,,,,,,*#H*,,,,,,,,,**HHHHHHHHHHHHHHHHH
HHHHHHHHHHHHHHHHH,,*,***,***,**/.HHHHHHHHHHHHH#*,,,*,***,***,*HHHHHHHHHHHHHHHHHH
HHHHHHHHHHHHHHHHHH,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*,*HHHHHHHHHHHHHHHHHH
HHHHHHHHHHHHHHHHHH**,***,***,***,***,***,***,***,***,***,***,*.HHHHHHHHHHHHHHHHH
HHHHHHHHHHHHHHHHHH*,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,*HHHHHHHHHHHHHHHHH
HHHHHHHHHHHHHHHHHH**,***,***,*******/..HHHHHHHHH.#/*,*,,,***,***HHHHHHHHHHHHHHHH
HHHHHHHHHHHHHHHHH*,*,*,******#HHHHHHHHHHHHHHHHHHHHHHHHHHHH./**,,,.HHHHHHHHHHHHHH
HHHHHHHHHHHHHHHH.,,*,***.HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH.*#HHHHHHHHHHHH
HHHHHHHHHHHHHHH/,,,*.HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH
HHHHHHHHHHHHHH,,#HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH
HHHHHHHHHHHH.HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH
HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH
HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH
HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH

Testing this on a different PC:

Windows 10
CPU: Intel 3770
GPU: Nvidia P2000
RAM: 32GB

On my other PC it ran with no problem, though with a different CPU, RAM, GPU, and OS :)

Not sure how to troubleshoot this as it gives me no error.

With 32GB of RAM this should run with no problem. I'm working on making this run on a Raspberry Pi, so a PC with 32 gigs of RAM is way over the top. It must be a configuration problem.
Can you try to do the commands manually?
Open the run.bat.
Look at the code and run the environment activation line, then the python app.py line. Then report which one is hanging.
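
For reference, the manual steps from a cmd prompt would look roughly like this (a sketch assuming the default env\ virtual environment the installer creates):

cd /d E:\VBA_PROJECTS\Git\gpt4all-ui
env\Scripts\activate.bat
python app.py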

Looks like a policy problem.

PS E:\VBA_PROJECTS\Git\gpt4all-ui> env\Scripts\activate
env\Scripts\activate : File E:\VBA_PROJECTS\Git\gpt4all-ui\env\Scripts\Activate.ps1 cannot be loaded because running scripts is disabled on this system. For more information, see about_Execution_Policies at https://go.microsoft.com/fwlink/?LinkID=135170.
At line:1 char:1
+ env\Scripts\activate
+ ~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : SecurityError: (:) [], PSSecurityException
    + FullyQualifiedErrorId : UnauthorizedAccess

Is there a one-liner I could add to the run.bat file to bypass this policy?

PowerShell -NoProfile -ExecutionPolicy Unrestricted -Scope Process
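
(For reference, -Scope Process is a parameter of the Set-ExecutionPolicy cmdlet rather than of PowerShell.exe itself; the session-scoped equivalent from inside PowerShell would be:)

Set-ExecutionPolicy -ExecutionPolicy Unrestricted -Scope Process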

This seems to work for me to activate the venv, but when I run python app.py nothing happens:

(env) PS E:\VBA_PROJECTS\Git\gpt4all-ui> python app.py
(env) PS E:\VBA_PROJECTS\Git\gpt4all-ui>

EDIT:

Tried running run.bat as administrator and got this error:

HHHHHHHHHHHH.HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH
HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH
HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH
HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH
The system cannot find the path specified.
python: can't open file 'C:\\Windows\\system32\\app.py': [Errno 2] No such file or directory

You should use cmd, not PowerShell, but I think this is already the case.
It seems to be a permission problem.
Are you sure that when you use administrative access you are in the right directory? The output suggests you are not in the project path.
Got to go; I'll be back this evening, European time. Hope you fix the problem.
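
(That would explain the error above: an elevated prompt starts in C:\Windows\system32, so the working directory has to be changed back to the repo before running the script, along the lines of:)

cd /d E:\VBA_PROJECTS\Git\gpt4all-ui
run.bat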

All I did was right-click on run.bat and choose Run as administrator.

Something is wrong, I guess, with the Windows setup and how its permissions are set or whatnot.

This PC is corporate, but not very strict, so I don't know who did what and when. But other users might run into a similar situation as I do.

Even manually opening cmd as admin, then cd'ing into the gpt4all-ui repo folder and running the bat from cmd, hangs on the same logo screen.

Not sure how to debug this as it gives no error. :(

Corporate PCs are generally difficult to use when you need some privileges.
Can you try to activate the environment, then do the python app.py?

Like I wrote a few comments above, in PowerShell it gave me this:

(env) PS E:\VBA_PROJECTS\Git\gpt4all-ui> python app.py
(env) PS E:\VBA_PROJECTS\Git\gpt4all-ui>

Same error here, I can't run it; it shows the logo and then crashes silently.

My current setup:

  • AMD FX-8350
  • 32GB of RAM
  • Nvidia 3060 12GB
  • Windows 10

Maybe related to a CPU that doesn't support AVX2 instructions?

I think you need to send this to the people working on the llama.cpp project. I can't help much with that, unfortunately, as my UI depends heavily on their upgrades. All low-level issues need to be addressed to them.

In my case the CPU does support AVX2, at least Google says so. I had trouble with a VM when I didn't have the CPU set to host CPU; then I could not run the app (illegal instruction). Now it runs in the VM.

Hello again, so I did some looking up, and the PC I'm trying to run on has no AVX2 support. Can it somehow be run on a CPU without AVX2?
[screenshot: CPU details showing no AVX2 support]
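
For anyone who wants to confirm AVX2 support from Python rather than a spec sheet, a minimal sketch using the third-party py-cpuinfo package (pip install py-cpuinfo; not part of gpt4all-ui) would be:

# check_avx2.py -- print whether this CPU advertises the AVX2 flag
from cpuinfo import get_cpu_info  # third-party: pip install py-cpuinfo

flags = get_cpu_info().get("flags", [])
print("AVX2 supported" if "avx2" in flags else "AVX2 NOT supported")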

Hello,

I managed to build llama.cpp without AVX2 support, but honestly, it runs very slowly.

If you'd like to try it, you can build it with -DLLAMA_AVX2=0 as a cmake argument.

But as I said before, don't expect much speed.

In our case, I think the best approach is to delegate the AI to the GPU instead of the CPU.

Where do I put that argument?
Last time I heard, GPU is not yet supported. If it is, how do you delegate? :) I'm a noob :)

You should put this argument on the cmake command, before running cmake --build, like this:

cd <path_to_llama_folder>
mkdir build
cd build
cmake .. -DLLAMA_AVX2=0
cmake --build . --config Release

Remember that llama.cpp is in another repository: https://github.com/ggerganov/llama.cpp

I managed to run it on CPU, but as you said, GPU isn't supported for now in the llama.cpp project.

And after I build it, what's next? Do I need to copy some files somewhere? Or will it be built somewhere on the PC and be accessible by GPT4All-ui?

I don't know where GPT4All-ui places the binaries; I just ran it from CMD to test that it works. But if you know where it's placed, it should be enough to replace the executable with the llama.cpp built without AVX2 support.

Maybe @ParisNeo can help us with this part.

Well, I think this repo uses pyllamacpp: https://github.com/nomic-ai/pyllamacpp#installation

I tried to activate the environment, then followed the instructions per the link above (pip install .). It installed everything with no errors, but when I run it, it still hangs on the GPT4All logo.

This repository has AVX2 enabled by default in the cmake file:

https://github.com/nomic-ai/pyllamacpp/blob/main/CMakeLists.txt#L70

Try to put the -DLLAMA_AVX2=0 argument in:

https://github.com/nomic-ai/pyllamacpp/blob/main/setup.py#L52

Like this:

    build_args = [
      f"-DLLAMA_AVX2=0"
    ]

This should build the llama binaries without AVX2 support. I don't know if the initial f is necessary; I don't have much experience with the Python build environment.

Almost; just this error remains. Thank you for helping me:

 -- Build files have been written to: E:/VBA_PROJECTS/Git/gpt4all-ui/pyllamacpp/build/temp.win-amd64-cpython-310/Release/_pyllamacpp
      Unknown argument -DLLAMA_AVX2=0
      Usage: cmake --build <dir>             [options] [-- [native-options]]
             cmake --build --preset <preset> [options] [-- [native-options]]
      Options:
        <dir>          = Project binary directory to be built.
        --preset <preset>, --preset=<preset>
                       = Specify a build preset.
        --list-presets[=<type>]
                       = List available build presets.
        --parallel [<jobs>], -j [<jobs>]
                       = Build in parallel using the given number of jobs.
                         If <jobs> is omitted the native build tool's
                         default number is used.
                         The CMAKE_BUILD_PARALLEL_LEVEL environment variable
                         specifies a default parallel level when this option
                         is not given.
        -t <tgt>..., --target <tgt>...
                       = Build <tgt> instead of default targets.
        --config <cfg> = For multi-configuration tools, choose <cfg>.
        --clean-first  = Build target 'clean' first, then build.
                         (To clean only, use --target 'clean'.)
        --resolve-package-references={on|only|off}
                       = Restore/resolve package references during build.
        -v, --verbose  = Enable verbose output - if supported - including
                         the build commands to be executed.
        --             = Pass remaining options to the native tool.
      Traceback (most recent call last):
        File "E:\VBA_PROJECTS\Git\gpt4all-ui\env\lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 353, in <module>
          main()
        File "E:\VBA_PROJECTS\Git\gpt4all-ui\env\lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 335, in main
          json_out['return_val'] = hook(**hook_input['kwargs'])
        File "E:\VBA_PROJECTS\Git\gpt4all-ui\env\lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 251, in build_wheel
          return _build_backend().build_wheel(wheel_directory, config_settings,
        File "C:\Users\andzejs\AppData\Local\Temp\pip-build-env-r2g4614w\overlay\Lib\site-packages\setuptools\build_meta.py", line 413, in build_wheel
          return self._build_with_temp_dir(['bdist_wheel'], '.whl',
        File "C:\Users\andzejs\AppData\Local\Temp\pip-build-env-r2g4614w\overlay\Lib\site-packages\setuptools\build_meta.py", line 398, in _build_with_temp_dir
          self.run_setup()
        File "C:\Users\andzejs\AppData\Local\Temp\pip-build-env-r2g4614w\overlay\Lib\site-packages\setuptools\build_meta.py", line 335, in run_setup
          exec(code, locals())
        File "<string>", line 134, in <module>
        File "C:\Users\andzejs\AppData\Local\Temp\pip-build-env-r2g4614w\overlay\Lib\site-packages\setuptools\__init__.py", line 108, in setup
          return distutils.core.setup(**attrs)
        File "C:\Users\andzejs\AppData\Local\Temp\pip-build-env-r2g4614w\overlay\Lib\site-packages\setuptools\_distutils\core.py", line 185, in setup
          return run_commands(dist)
        File "C:\Users\andzejs\AppData\Local\Temp\pip-build-env-r2g4614w\overlay\Lib\site-packages\setuptools\_distutils\core.py", line 201, in run_commands
          dist.run_commands()
        File "C:\Users\andzejs\AppData\Local\Temp\pip-build-env-r2g4614w\overlay\Lib\site-packages\setuptools\_distutils\dist.py", line 969, in run_commands
          self.run_command(cmd)
        File "C:\Users\andzejs\AppData\Local\Temp\pip-build-env-r2g4614w\overlay\Lib\site-packages\setuptools\dist.py", line 1221, in run_command
          super().run_command(command)
        File "C:\Users\andzejs\AppData\Local\Temp\pip-build-env-r2g4614w\overlay\Lib\site-packages\setuptools\_distutils\dist.py", line 988, in run_command
          cmd_obj.run()
        File "C:\Users\andzejs\AppData\Local\Temp\pip-build-env-r2g4614w\overlay\Lib\site-packages\wheel\bdist_wheel.py", line 343, in run
          self.run_command("build")
        File "C:\Users\andzejs\AppData\Local\Temp\pip-build-env-r2g4614w\overlay\Lib\site-packages\setuptools\_distutils\cmd.py", line 318, in run_command
          self.distribution.run_command(command)
        File "C:\Users\andzejs\AppData\Local\Temp\pip-build-env-r2g4614w\overlay\Lib\site-packages\setuptools\dist.py", line 1221, in run_command
          super().run_command(command)
        File "C:\Users\andzejs\AppData\Local\Temp\pip-build-env-r2g4614w\overlay\Lib\site-packages\setuptools\_distutils\dist.py", line 988, in run_command
          cmd_obj.run()
        File "C:\Users\andzejs\AppData\Local\Temp\pip-build-env-r2g4614w\overlay\Lib\site-packages\setuptools\_distutils\command\build.py", line 131, in run
          self.run_command(cmd_name)
        File "C:\Users\andzejs\AppData\Local\Temp\pip-build-env-r2g4614w\overlay\Lib\site-packages\setuptools\_distutils\cmd.py", line 318, in run_command
          self.distribution.run_command(command)
        File "C:\Users\andzejs\AppData\Local\Temp\pip-build-env-r2g4614w\overlay\Lib\site-packages\setuptools\dist.py", line 1221, in run_command
          super().run_command(command)
        File "C:\Users\andzejs\AppData\Local\Temp\pip-build-env-r2g4614w\overlay\Lib\site-packages\setuptools\_distutils\dist.py", line 988, in run_command
          cmd_obj.run()
        File "C:\Users\andzejs\AppData\Local\Temp\pip-build-env-r2g4614w\overlay\Lib\site-packages\setuptools\command\build_ext.py", line 84, in run
          _build_ext.run(self)
        File "C:\Users\andzejs\AppData\Local\Temp\pip-build-env-r2g4614w\overlay\Lib\site-packages\setuptools\_distutils\command\build_ext.py", line 345, in run
          self.build_extensions()
        File "C:\Users\andzejs\AppData\Local\Temp\pip-build-env-r2g4614w\overlay\Lib\site-packages\setuptools\_distutils\command\build_ext.py", line 467, in build_extensions
          self._build_extensions_serial()
        File "C:\Users\andzejs\AppData\Local\Temp\pip-build-env-r2g4614w\overlay\Lib\site-packages\setuptools\_distutils\command\build_ext.py", line 493, in _build_extensions_serial
          self.build_extension(ext)
        File "<string>", line 123, in build_extension
        File "C:\Users\andzejs\AppData\Local\Programs\Python\Python310\lib\subprocess.py", line 526, in run
          raise CalledProcessError(retcode, process.args,
      subprocess.CalledProcessError: Command '['cmake', '--build', '.', '-DLLAMA_AVX2=0', '--config', 'Release']' returned non-zero exit status 1.
      [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
  ERROR: Failed building wheel for pyllamacpp
Failed to build pyllamacpp
ERROR: Could not build wheels for pyllamacpp, which is required to install pyproject.toml-based projects

I'm trying to build pyllamacpp; give me some minutes and I'll come back with the solution, or at least a workaround.

My bad, the argument should be in cmake_args, not build_args; after that, pyllamacpp builds without problems.

Another way to force the param is to edit https://github.com/nomic-ai/pyllamacpp/blob/main/CMakeLists.txt#L70 and set it to OFF.

Now, how can I use the built pyllamacpp instead of the one coming from pip in gpt4all-ui?
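
(For reference, that CMakeLists.txt edit amounts to flipping the option's default; something like this, assuming the option line follows llama.cpp's usual pattern:)

option(LLAMA_AVX2 "llama: enable AVX2" OFF)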

Thanks, the build is done, and yes, I would like to know that too.

If any of you want to participate, fork the project. You can summarize this in an md file, put it in the doc folder, then do a pull request and I'll accept it so that your experience can be shared with others clearly.
Thanks for testing.

Will do once we get it working. Right now we're stuck: it has been pip-installed somewhere, but we're not sure how to point gpt4all-ui to use this newly built pyllamacpp with AVX2=0.
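
(One approach that should work, as a sketch: assuming the env\ virtual environment is the one gpt4all-ui runs with, and path\to\pyllamacpp is your patched checkout, activate the environment from the gpt4all-ui folder and reinstall the local package over the pip-installed one:)

env\Scripts\activate.bat
pip install --force-reinstall path\to\pyllamacpp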

I have moved one step closer:

All I did at first:

Another way to force the param is to edit https://github.com/nomic-ai/pyllamacpp/blob/main/CMakeLists.txt#L70 and set it to OFF.

That didn't work.

Then I added a cmake arg on line 47 of setup.py:

cmake_args = [
            f"-DCMAKE_LIBRARY_OUTPUT_DIRECTORY={extdir}{os.sep}",
            f"-DPYTHON_EXECUTABLE={sys.executable}",
            f"-DCMAKE_BUILD_TYPE={cfg}", # not used on MSVC, but no harm
            f"-DLLAMA_AVX2=0",  # <-- added to disable AVX2
        ]

And now I'm at least at this stage:

HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH
HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH
HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH
Checking discussions database...
Upgrading schema to version 2...
llama_model_load: loading model from './models/gpt4all-lora-quantized-ggml.bin' - please wait ...
./models/gpt4all-lora-quantized-ggml.bin: invalid model file (bad magic [got 0x67676d66 want 0x67676a74])
        you most likely need to regenerate your ggml files
        the benefit is you'll get 10-100x faster load times
        see https://github.com/ggerganov/llama.cpp/issues/91
        use convert-pth-to-ggml.py to regenerate from original pth
        use migrate-ggml-2023-03-30-pr613.py if you deleted originals
llama_init_from_file: failed to load model
 * Serving Flask app 'GPT4All-WebUI'
 * Debug mode: off
An attempt was made to access a socket in a way forbidden by its access permissions
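
(For reference, the migration script the log points to takes an input and an output path; a hedged sketch with the model file above, where the output name is just an example:)

python migrate-ggml-2023-03-30-pr613.py models\gpt4all-lora-quantized-ggml.bin models\gpt4all-lora-quantized-ggjt.bin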

One step forward, two steps back. Seems like the converter is borked?
When running install.bat and selecting a model to convert, it can't find the converter script:

Skipping download of model file...

In order to make a model work, it needs to go through the LLaMA tokenizer, this will fix errors with the model in run.bat. Do you want to convert the model? [Y,N]?Y
[1] models\.keep
[2] models\gpt4all-lora-quantized-ggml.bin
[3] models\README.md
Enter the number of the model you want to convert: 2

You selected models\gpt4all-lora-quantized-ggml.bin

Do you want to convert the selected model to the new format? [Y,N]?Y

Converting the model to the new format...
Cloning into 'tmp\llama.cpp'...
remote: Enumerating objects: 1707, done.
remote: Counting objects: 100% (1707/1707), done.
remote: Compressing objects: 100% (620/620), done.
remote: Total 1707 (delta 1088), reused 1632 (delta 1053), pack-reused 0
Receiving objects: 100% (1707/1707), 1.86 MiB | 9.74 MiB/s, done.
Resolving deltas: 100% (1088/1088), done.
        1 file(s) moved.
C:\Users\andzejs\AppData\Local\Programs\Python\Python310\python.exe: can't open file 'E:\\VBA_PROJECTS\\Git\\gpt4all-ui\\tmp\\llama.cpp\\migrate-ggml-2023-03-30-pr613.py': [Errno 2] No such file or directory

Error during model conversion. Restarting...
        1 file(s) moved.

In order to make a model work, it needs to go through the LLaMA tokenizer, this will fix errors with the model in run.bat. Do you want to convert the model? [Y,N]?

And I checked that folder; there is no such file anymore.

Well, as I see it, llama.cpp is now running without problems, so we can confirm the issue is that AVX2 support needs to be disabled on CPUs that don't support it.

I'm thinking about a solution. I think the best way is to include a cmake script in llama.cpp that auto-disables it at the build-generation stage if the processor doesn't support it.

About the converter: days ago it worked fine for me; now it fails with the same error.

Give me some time, gonna try to fix this too.

OK, found it, the problem is caused by this commit:

ggerganov/llama.cpp@723dac5

I'm gonna make the changes on gpt4all-ui and send a pull request.

If you need it fast, navigate with CMD to gpt4all-ui/tmp/llama.cpp and then do a git checkout 0f07cacb05f49704d35a39aa27cfd4b419eb6f8d.

This reverts the llama.cpp checkout to the last commit before the converter change.
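
(Spelled out as commands from the gpt4all-ui folder:)

cd tmp\llama.cpp
git checkout 0f07cacb05f49704d35a39aa27cfd4b419eb6f8d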

I was able to convert the model, but it's still crashing for me. How do you use the built pyllamacpp binary in gpt4all-ui?

Thanks for finding the right commit.

You can do that for now, but I am uploading (at very low speed) a working converted model. We'll remove the conversion step altogether. Nomic AI gave me their consent to release my own model in the right format.

I'm cursed, the connection has crashed and my transfer has stopped. :(

Can any of you who already has a working converted model push it to my Hugging Face space?
https://huggingface.co/ParisNeo/GPT4All/tree/main

Just do a pull request and I'll accept it.

git checkout 0f07cacb05f49704d35a39aa27cfd4b419eb6f8d

Hi, for now I have updated the scripts to use the checkout reference you sent. Thank you very much.

@ParisNeo I'm uploading it now to your Hugging Face repository; it's the first time I've uploaded a model, so I'm crossing my fingers that I do it right.

Thank you very much.

No need to say thanks, I love contributing to awesome projects ^^

A little update on AVX2=0.

So the model conversion is not needed. Well, I used the one from my VM where I converted it, but it may as well have been downloaded elsewhere and already converted; I can't remember.

Either way, the missing conversion script was not needed. The model works.

But...

As with all good things, they don't last forever. The UI works up to the point where you wait for the GPT to send you output, but it never comes.

CPU usage skyrockets for a few moments, then it just gives up and outputs nothing.

llama_generate: seed = 1681549441

system_info: n_threads = 8 / 8 | AVX = 1 | AVX2 = 0 | AVX512 = 0 | FMA = 0 | NEON = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 0 | VSX = 0 |
sampling: temp = 0.100000, top_k = 40, top_p = 0.950000, repeat_last_n = 40, repeat_penalty = 1.300000
generate: n_ctx = 512, n_batch = 8, n_predict = 287, n_keep = 0


 user: hello tell me a joke
gpt4all:

user: can cat be human?
gpt4all:


user: are you alive?
gpt4all:  [end of text]

llama_print_timings:        load time = 35956.38 ms
llama_print_timings:      sample time =     6.02 ms /     4 runs   (    1.51 ms per run)
llama_print_timings: prompt eval time = 64943.48 ms /    98 tokens (  662.69 ms per token)
llama_print_timings:        eval time =  1058.33 ms /     2 runs   (  529.16 ms per run)
llama_print_timings:       total time = 184629.24 ms

TL;DR: it's useless on CPUs without AVX2 :(