CyberTimon/Powerpointer-For-Local-LLMs

Index out of bound errors

Closed this issue · 4 comments

Hi there,

First off, thank you for creating this... the potential I can see for it where I work is incredible!

However, I am seeing the same error frequently:

File "/home/ec2-user/SageMaker/.cs/conda/envs/textgen/lib/python3.10/site-packages/transformers/models/gptj/modeling_gptj.py", line 223, in forward
    sincos = torch.gather(embed_positions, 1, repeated_position_ids)
RuntimeError: index 2048 is out of bounds for dimension 1 with size 2048
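For anyone reading this later: the traceback above is GPT-J's 2048-entry rotary position table being indexed at position 2048, i.e. the prompt plus the tokens generated so far have exceeded the model's 2048-token context window. A minimal sketch of the kind of guard that avoids it — the function name, the reserve value, and the dummy token ids are all illustrative, not from either repo:

```python
def clamp_to_context(token_ids, max_positions=2048, reserve_for_output=200):
    """Keep only the most recent tokens so every position id stays < max_positions.

    GPT-J's position table has 2048 entries, so any position id >= 2048
    raises the 'index 2048 is out of bounds' error shown in the traceback.
    reserve_for_output leaves room for the tokens the model will generate.
    """
    budget = max_positions - reserve_for_output
    if len(token_ids) > budget:
        return token_ids[-budget:]  # drop the oldest tokens
    return token_ids

# Dummy ids standing in for a tokenised prompt that is too long:
prompt = list(range(3000))
clamped = clamp_to_context(prompt)
print(len(clamped))  # 1848 tokens kept, the most recent ones
```

In practice the web UI's "truncation length" setting does this for you, but it must match the model's actual context size.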

I am running this on AWS SageMaker on a CPU-backed instance (yet to try on a GPU, but will later on), and following your example with AI as the topic I am always greeted with this error. Any ideas?! I am able to get the text-generation API running OK (I think):

(textgen) [ec2-user@ip-10-10-1-139 text-generation-webui]$ python server.py --api --model eleutherai_gpt-j-6b --model-dir ../text-generation-webui/models/ --verbose --cpu --load-in-8bit
/home/ec2-user/SageMaker/.cs/conda/envs/textgen/lib/python3.10/site-packages/torch/cuda/__init__.py:546: UserWarning: Can't initialize NVML
warnings.warn("Can't initialize NVML")
INFO:Loading eleutherai_gpt-j-6b...
INFO:Loaded the model in 25.50 seconds.

Help?
Cheers,
Dan

Hello @datadanb
This error comes from text-generation-webui itself and is not related to this repo.

But I see a few things that might be causing it:

  1. GPT-J isn't a very good model for this task, so the powerpoints won't be good.
  2. Try downloading some 4-bit LLaMA models from Hugging Face in the ggml format and run them in text-generation-webui with llama.cpp. This will improve performance a lot, and the powerpoints will actually generate, because LLaMA is far more capable than GPT-J.
  3. Running this with the llama.cpp backend in text-generation-webui will hopefully also resolve the errors you're getting.
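The suggested setup might look something like this — the model file name is hypothetical, and the flags are only the ones already shown in the log above; check the text-generation-webui docs for the exact options in your version:

```shell
# Hypothetical example: a 4-bit ggml LLaMA checkpoint downloaded from
# Hugging Face into models/, so the webui can load it via llama.cpp.
cd text-generation-webui
python server.py --api --cpu --model llama-7b.ggmlv3.q4_0.bin
```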

Refer to the documentation/repo from text-generation-webui for more info.

If none of this fixes your issue, open an issue on oobabooga's text-generation-webui repo, as the maintainer there can better help with it.

Kind regards,
Timon Käch

Great, thanks @CyberTimon, and thanks for such a speedy response! Will take your advice and give it a shot :)

So I downloaded a LLaMA model from HF and on the first try it worked a treat, thanks again @CyberTimon. Issue fixed (albeit not with this repo!)

Nice to hear! Have fun, and feel free to open another issue if things don't work as they should.