
RankLLM

We offer a suite of prompt decoders for listwise document reranking with large language models (GPT-3.5, GPT-4, and Vicuna), albeit with a current focus on RankVicuna. Some of the code in this repository is borrowed from RankGPT!

Releases

current_version = 0.0.7

📟 Instructions

More instructions to be added soon!
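In the meantime, assuming the package is the one published on PyPI under the name rank-llm (matching the version listed above), installation should amount to:

```bash
pip install rank-llm
```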

🦙🐧 Model Zoo

The following table lists our models hosted on Hugging Face:

| Model Name | Hugging Face Identifier/Link |
|------------|------------------------------|
| RankVicuna 7B - V1 | castorini/rank_vicuna_7b_v1 |
| RankVicuna 7B - V1 - No Data Augmentation | castorini/rank_vicuna_7b_v1_noda |
| RankVicuna 7B - V1 - FP16 | castorini/rank_vicuna_7b_v1_fp16 |
| RankVicuna 7B - V1 - No Data Augmentation - FP16 | castorini/rank_vicuna_7b_v1_noda_fp16 |
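
As a quick smoke test, the checkpoints above should load as standard causal language models with Hugging Face transformers. The snippet below is a minimal sketch under that assumption; the listwise prompt shown is only an illustrative placeholder, not necessarily the exact template RankVicuna was trained with (see the paper for details).

```python
# Minimal sketch: loading a RankVicuna checkpoint with Hugging Face transformers
# and prompting it to rerank a small list of candidate passages.
# Assumptions: the checkpoint loads as a standard causal LM, and the prompt below
# is only an illustrative stand-in for the actual RankVicuna listwise template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "castorini/rank_vicuna_7b_v1"  # or one of the *_noda / *_fp16 variants

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,  # half precision to fit a 7B model on a single GPU
    device_map="auto",          # requires the accelerate package
)

query = "what is a lobster roll?"
passages = [
    "A lobster roll is a sandwich native to New England made of lobster meat in a bun.",
    "Bread rolls are small, often round loaves of bread served as a side dish.",
    "Lobsters are large marine crustaceans found in oceans worldwide.",
]

# Illustrative listwise prompt: number the candidates and ask for an ordering.
numbered = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
prompt = (
    f"I will provide you with {len(passages)} passages, each indicated by a numerical identifier [].\n"
    f"Rank the passages based on their relevance to the query: {query}\n"
    f"{numbered}\n"
    f"Rank the {len(passages)} passages above. The output format should be [] > [], e.g., [2] > [1]."
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=32, do_sample=False)
# Decode only the newly generated tokens (the model's ranking, e.g., "[1] > [3] > [2]").
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```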

✨ References

If you use RankLLM, please cite the following paper: RankVicuna: Zero-Shot Listwise Document Reranking with Open-Source Large Language Models (arXiv:2309.15088)

@ARTICLE{pradeep2023rankvicuna,
  title   = {RankVicuna: Zero-Shot Listwise Document Reranking with Open-Source Large Language Models},
  author  = {Ronak Pradeep and Sahel Sharifymoghaddam and Jimmy Lin},
  year    = {2023},
  journal = {arXiv preprint arXiv:2309.15088}
}

🙏 Acknowledgments

This research is supported in part by the Natural Sciences and Engineering Research Council (NSERC) of Canada.