xlang-ai/batch-prompting
[EMNLP 2023 Industry Track] A simple prompting approach that enables LLMs to run inference in batches.
Python
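The repository's description names the batch-prompting idea: pack several samples into a single prompt so one LLM call answers all of them. Below is a minimal, hypothetical sketch of that pattern; the function names, the `Q[i]`/`A[i]` markers, and the mock reply are illustrative assumptions, not the repo's actual implementation or prompt format.

```python
# Hypothetical sketch of batch prompting: bundle k questions into one
# prompt with indexed markers, then split the model's single reply back
# into per-question answers. No API call is made here; the reply is mocked.

def build_batch_prompt(questions):
    """Join k questions into one prompt, asking for indexed A[i] answers."""
    lines = [f"Q[{i}]: {q}" for i, q in enumerate(questions)]
    lines.append("Answer each question on its own line as A[i]: <answer>.")
    return "\n".join(lines)

def parse_batch_response(response, k):
    """Recover k answers from a reply formatted as A[i]: ... lines."""
    answers = [""] * k
    for line in response.splitlines():
        if line.startswith("A[") and "]:" in line:
            idx_str, _, rest = line.partition("]:")
            idx = int(idx_str[2:])  # strip the leading "A[" to get the index
            if 0 <= idx < k:
                answers[idx] = rest.strip()
    return answers

questions = ["What is 2 + 3?", "Capital of France?"]
prompt = build_batch_prompt(questions)  # would be sent in one LLM call

# A mock model reply in the expected format, standing in for the LLM.
mock_reply = "A[0]: 5\nA[1]: Paris"
print(parse_batch_response(mock_reply, len(questions)))
```

The payoff of this pattern is amortization: one request (and one shared instruction preamble) covers k samples, cutting per-sample latency and token overhead at the cost of a parsing step.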
Stargazers
- baiyunping333 (Guangzhou)
- bellyfat
- BlankCheng (UC San Diego)
- cchudant (Paris, France)
- CeyaoZhang (The Chinese University of Hong Kong, Shenzhen)
- changquanyou
- cnxupupup
- EssenceSentry
- ethanjyx (klarity.ai)
- fly51fly (PRIS)
- GanjinZero (DAMO Academy)
- Gi-gigi
- gradetwo
- he0x
- HillZhang1999 (Bytedance)
- hppRC (Japan)
- hundredeuk2 (Hanyang Univ. BIS Lab)
- JeffCarpenter (Canada)
- JerryPeng21cuhk
- ju-resplande (Federal University of Goiás)
- leesh6796 (KAIST)
- lfy79001 (Institute of Automation, Chinese Academy of Sciences)
- longxudou (Harbin Institute of Technology)
- lwaekfjlk (CKC@ZJU -> LTI@CMU -> CS@UIUC)
- moisutsu (Takeda-Sasano Group)
- nikitavoloboev (Tbilisi)
- st01cs
- sustcsonglin (MIT)
- svjack
- Timothyxxx (The University of Hong Kong)
- trojblue (Quebec)
- tyson-dowd (Microsoft)
- vicgalle (Komorebi AI & ICMAT-CSIC)
- VPeterV (The Hong Kong University of Science and Technology (HKUST))
- wolegechu (StepFUN)
- zxlzr (earth)