How can I find the model evaluation metrics that do not appear in the paper tables?
zhimin-z opened this issue · 2 comments
I noticed that some metrics do not show up in the two tables of the original paper. How can I access them? @zxx000728 @MikeGu721 @yixin-zhu
Where is alpaca-lora-7b:
There's no result of alpaca-lora-7b
As of our experimental phase in May 2023, we were unable to locate an "alpaca-lora-7b" model released by the original author on Huggingface; we found only numerous "alpaca-lora-7b" models from unofficial uploaders. By the time we ran additional experiments in November 2023, the "alpaca-lora-7b" model was no longer widely used in benchmark studies.
Only the top 8 are shown for each column:
Thanks for reminding us; we will upload the remaining metrics to this GitHub repo within one week.
Thanks for the clarification!