qwopqwop200/GPTQ-for-LLaMa

LoRA and diff with bitsandbytes

RonanKMcGovern opened this issue · 0 comments

  1. What changes would I need to make for GPTQ to support LoRA for Llama 2? (See the first sketch below for the kind of setup I mean.)
  2. What's the main difference between GPTQ and bitsandbytes? Is it that GPTQ re-adjusts the weights to keep the same loss function shape? (See the second sketch below.)
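
For question 1, here is a minimal sketch of the usual pattern for LoRA on top of a frozen quantized layer. This is not this repo's actual API: the wrapper name, the constructor arguments, and the idea that the base module behaves like an `nn.Linear` are assumptions. The only point is that the quantized weights stay frozen while a small full-precision low-rank update is trained.

```python
import torch.nn as nn

class LoRAOverQuantLinear(nn.Module):
    """Hypothetical wrapper: frozen quantized linear + trainable low-rank update."""

    def __init__(self, quant_linear: nn.Module, in_features: int, out_features: int,
                 r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = quant_linear                 # e.g. a GPTQ-quantized linear, kept frozen
        for p in self.base.parameters():
            p.requires_grad = False
        self.lora_A = nn.Linear(in_features, r, bias=False)   # trainable
        self.lora_B = nn.Linear(r, out_features, bias=False)  # trainable
        nn.init.zeros_(self.lora_B.weight)       # zero init: wrapper starts as an exact no-op
        self.scaling = alpha / r

    def forward(self, x):
        # Output of the frozen quantized layer plus the scaled low-rank correction.
        return self.base(x) + self.lora_B(self.lora_A(x)) * self.scaling
```

In practice each attention/MLP projection of the Llama model would be wrapped like this, and only `lora_A`/`lora_B` are trained, so the packed quantized weights never need gradients.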
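
For question 2, a toy numpy comparison of the two ideas. The real libraries are more involved (bitsandbytes uses blockwise/vector-wise absmax scaling and, for int8, outlier handling; GPTQ uses a Hessian built from calibration data), so the single shared scale and the least-squares compensation below are simplifications and not either library's actual algorithm. The contrast it tries to show: bitsandbytes-style quantization rounds each weight without looking at any data, while GPTQ quantizes column by column and adjusts the not-yet-quantized weights so that the layer output `W @ X` on calibration inputs changes as little as possible.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))                              # toy weight matrix
X = rng.normal(size=(8, 8)) @ rng.normal(size=(8, 64))   # correlated "calibration" inputs

scale = np.abs(W).max() / 7                              # shared 4-bit symmetric scale (simplified)

def quantize(w):
    """Symmetric 4-bit round-to-nearest with one shared scale (simplified)."""
    return np.clip(np.round(w / scale), -8, 7) * scale

# Round-to-nearest: every weight quantized independently, no calibration data used.
W_rtn = quantize(W)

# Error compensation, column by column: after quantizing column j, shift its
# quantization error onto the not-yet-quantized columns so that W @ X changes
# as little as possible (a crude stand-in for GPTQ's Hessian-based update).
W_comp = W.copy()
n = W.shape[1]
for j in range(n):
    q = quantize(W_comp[:, j])
    err = W_comp[:, j] - q
    W_comp[:, j] = q
    if j + 1 < n:
        # Find coefficients c with c @ X[j+1:] ~ X[j], then add err (outer) c to the
        # remaining columns so the output error from column j is largely cancelled.
        c, *_ = np.linalg.lstsq(X[j + 1:].T, X[j], rcond=None)
        W_comp[:, j + 1:] += np.outer(err, c)

print("||WX - W_rtn  X|| :", np.linalg.norm(W @ X - W_rtn @ X))
print("||WX - W_comp X|| :", np.linalg.norm(W @ X - W_comp @ X))
```

With correlated calibration inputs the compensated version typically ends up with a noticeably smaller layer output error than plain rounding, which is the sense in which GPTQ "re-adjusts the weights": it minimizes per-layer output error on calibration data rather than quantizing each weight in isolation.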