automatic-evaluation
There are 6 repositories under the automatic-evaluation topic.
terryyz/ice-score
[EACL 2024] ICE-Score: Instructing Large Language Models to Evaluate Code
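For context, the general "LLM as a code judge" pattern the repo's title refers to can be sketched in a few lines of Python: show the model a task description and a candidate solution, ask for a numeric rating, and parse the reply. This is an illustrative assumption, not ICE-Score's actual prompt, rubric, or API; ask_llm is a hypothetical stand-in for whatever chat-completion client you use.

    # Minimal sketch of LLM-based code evaluation (not the ICE-Score implementation).
    import re
    from typing import Callable

    PROMPT_TEMPLATE = """You are grading a code submission.
    Task: {task}
    Candidate code:
    {code}
    Rate the code's functional correctness on a 0-4 scale and reply with the number only."""

    def score_code(task: str, code: str, ask_llm: Callable[[str], str]) -> int:
        """Ask an LLM judge for a 0-4 score and extract the first digit it returns."""
        reply = ask_llm(PROMPT_TEMPLATE.format(task=task, code=code))
        match = re.search(r"[0-4]", reply)
        if match is None:
            raise ValueError(f"could not parse a score from: {reply!r}")
        return int(match.group())

    if __name__ == "__main__":
        # Stub judge so the sketch runs without an API key.
        fake_llm = lambda prompt: "3"
        print(score_code("Reverse a string.", "def rev(s): return s[::-1]", fake_llm))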
hprodrig/MONSERRATE_Corpus
MONSERRATE is a dataset specifically created to evaluate Question Generation systems. It has, on average, 26 questions associated with each source sentence, aiming to serve as an "exhaustive" reference.
laihuiyuan/eval-formality-transfer
Multidimensional Evaluation for Text Style Transfer Using ChatGPT. Human Judgement as a Compass to Navigate Automatic Metrics for Formality Transfer (HumEval 2022)
davidheineman/salsa
Success and Failure Linguistic Simplification Annotation 💃
johnny-brav0/AutomaticEvaluation
Automatic evaluation of textual answers on the Kaggle Automated Essay Scoring (AES) dataset.
prathamSharma25/WebAES
An AI expert system to automatically evaluate subjective answers submitted in online assessments.