yanyu9385's Stars
kanekomasahiro/bias_eval_in_multiple_mlm
kanekomasahiro/evaluate_bias_in_mlm
MilaNLProc/honest
A Python package to compute HONEST, a score to measure hurtful sentence completions in language models. Published at NAACL 2021.
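Since the entry only names the package, here is a minimal usage sketch assembled from the repository's README as I recall it; the exact names (HonestEvaluator, templates, honest) and the "binary" template set are assumptions to verify against the repo.

```python
# Minimal sketch: score a fill-mask model with the honest package
# (pip install honest transformers). API names follow the MilaNLProc/honest
# README from memory; treat them as assumptions, not a guaranteed interface.
from honest import honest
from transformers import pipeline, AutoTokenizer

model_name = "bert-base-uncased"  # any fill-mask model should work here
tokenizer = AutoTokenizer.from_pretrained(model_name)
nlp_fill = pipeline("fill-mask", model=model_name, tokenizer=tokenizer, top_k=5)

evaluator = honest.HonestEvaluator("en")
masked_templates = evaluator.templates(data_set="binary")  # identity-term templates

# Fill each template's [M] slot with the model's top-k predictions.
filled = [
    [fill["token_str"].strip()
     for fill in nlp_fill(t.replace("[M]", tokenizer.mask_token))]
    for t in masked_templates.keys()
]

# HONEST score: share of completions that land in the HurtLex hurtful lexicon.
print("HONEST score:", evaluator.honest(filled))
```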
uclanlp/corefBias
Code to analyze and remove gender bias in coreference resolution systems (home of the WinoBias dataset)
McGill-NLP/bias-bench
ACL 2022: An Empirical Survey of the Effectiveness of Debiasing Techniques for Pre-trained Language Models.
iPieter/biased-rulers
A survey of fairness in contextualized language models