hc20k/LLMChat

error loading model: failed to open models/llama/Drararara_llama-13B-ggml: Permission denied

Opened this issue · 4 comments

Every model I tried to load fails with the same error (screenshot attached).

Log: when_trying_to_request.txt

Same issue here...

hc20k commented

Try putting the .bin files directly into the models/llama directory; the script doesn't support walking through subfolders yet.
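A minimal sketch of the fix: flatten the nested model folder so the loader finds the .bin file at the top of models/llama. The nested folder name comes from the error message and the model filename from later in this thread; adjust both to match your setup.

```shell
# Illustrative setup: a .bin file nested one level too deep
mkdir -p models/llama/Drararara_llama-13B-ggml
touch models/llama/Drararara_llama-13B-ggml/ggml-model-q4_0.bin

# Move the weights up into models/llama, where the script expects them
mv models/llama/Drararara_llama-13B-ggml/*.bin models/llama/

# The .bin file should now be listed directly under models/llama
ls models/llama/*.bin
```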

> Try putting the .bin files directly into the models/llama directory; the script doesn't support walking through subfolders yet.

Hello! Thanks for responding!

We tried several models such as ggml-model-q4_0.bin by Drararara.

After a little more than a minute, a simple "Hi!" request produces nonsense (screenshot attached).

We have only tested the text chat; the bot stays silent in the voice chat.
All of the requirements were installed correctly.

System:

  • AMD Ryzen 9 5950X
  • 32GB RAM
  • Nvidia RTX3090 (24GB)
hc20k commented

DielynLandel

This may be due to the temperature, frequency penalty, or presence penalty in the config. I find that a higher frequency penalty (1.1) works better for LLaMA models. Personally, I use these settings for LLaMA:

  • temperature = 0.8
  • presence_penalty = 0.4
  • max_tokens = 0
  • frequency_penalty = 1.1
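For reference, the settings above as a plain dictionary. The key names mirror the bullet list; whether LLMChat's config file uses exactly these keys is an assumption, so check your own config for the actual names.

```python
# Hypothetical LLaMA sampling settings mirroring the values above.
# Actual config keys in LLMChat may differ.
llama_settings = {
    "temperature": 0.8,        # moderate randomness
    "presence_penalty": 0.4,   # mild push toward new topics
    "max_tokens": 0,           # 0 = no explicit completion limit here
    "frequency_penalty": 1.1,  # higher value curbs repetition on LLaMA
}

for key, value in llama_settings.items():
    print(f"{key} = {value}")
```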

I've tested this with pygmalion-7b and wizardlm-7b/13b, and it works pretty well.

It also helps LLaMA if you provide a short chat example in your initial prompt, like so:

...your initial prompt

{user_name}: Hello!
{bot_name}: Hi, how's it going today?
{user_name}: Fine, how about you?
{bot_name}: I'm doing well too.
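The example exchange above can be appended to the initial prompt programmatically. This is only a sketch; the `build_prompt` helper and the way LLMChat substitutes `{user_name}`/`{bot_name}` are assumptions, not the project's actual code.

```python
# Hypothetical helper: append a short example chat to the initial prompt
# so LLaMA sees the expected turn-taking format before the real conversation.
def build_prompt(initial_prompt: str, user_name: str, bot_name: str) -> str:
    example = (
        f"{user_name}: Hello!\n"
        f"{bot_name}: Hi, how's it going today?\n"
        f"{user_name}: Fine, how about you?\n"
        f"{bot_name}: I'm doing well too."
    )
    return initial_prompt.rstrip() + "\n\n" + example

prompt = build_prompt("You are a friendly chat bot.", "Alice", "Bot")
print(prompt)
```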

Please let me know if this works out for you; if not, I'll be happy to help you out some more.