Pinned Repositories
.github
do-not-answer
Do-Not-Answer: A Dataset for Evaluating Safeguards in LLMs
fairlib
A framework for assessing and improving classification fairness.
OpenFactVerification
Loki: an open-source solution that automates fact verification.
OpenRedTeaming
Papers about red teaming LLMs and multimodal models.